2026-03-31T19:07:47.509 INFO:root:teuthology version: 1.2.4.dev37+ga59626679
2026-03-31T19:07:47.514 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-31T19:07:47.532 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340
branch: tentacle
description: rados/singleton/{all/ec-esb-fio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{ubuntu_latest}}
email: null
first_in_suite: false
flavor: default
job_id: '4340'
ktype: distro
last_in_suite: false
machine_type: vps
meta:
- desc: all/ec-esb-fio
name: kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 6
    size: 20
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      global:
        mon client directed command retry: 5
        mon election default strategy: 1
        ms bind msgr2: false
        ms inject socket failures: 5000
        ms type: async
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
        mon scrub interval: 300
      osd:
        bdev async discard: true
        bdev enable discard: true
        bluestore allocator: avl
        bluestore block size: 96636764160
        bluestore debug extent map encode check: true
        bluestore fsck on mount: true
        bluestore onode segment size: 512K
        bluestore write v2: false
        bluestore write v2 random: true
        bluestore zero block detection: true
        bluestore_elastic_shared_blobs: true
        debug bluefs: 20
        debug bluestore: 20
        debug ms: 1
        debug osd: 5
        debug rocksdb: 10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd debug verify cached snaps: true
        osd debug verify missing on start: true
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd mclock override recovery settings: true
        osd mclock profile: high_recovery_ops
        osd mclock skip benchmark: true
        osd memory target: 939524096
        osd objectstore: bluestore
        osd op queue: debug_random
        osd op queue cut off: debug_random
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(OSD_SLOW_PING_TIME
    - \(MON_DOWN\)
    mon_bind_msgr2: false
    sha1: 5bb3278730741031382ca9c3dc9d221a942e06a2
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 5bb3278730741031382ca9c3dc9d221a942e06a2
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-tentacle
    sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mgr.x
  - client.0
- - osd.0
  - osd.1
- - osd.2
  - osd.3
- - osd.4
  - osd.5
seed: 6407
sha1: 5bb3278730741031382ca9c3dc9d221a942e06a2
sleep_before_teardown: 0
subset: 1/100000
suite: rados
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLd69V00xYC2CMtkKaj3kAPlLI99FmnqsYl0RoH4t9jdwc9wliMTIlX+q+JRc9A8cvWVYXXkUC885ro/3uByaFw=
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjJZoFckQT4dMqZz/UV7jOh0mm6AYkzTJa/zbNkN6aRKmLm7fj3mGn+TBrHKZKJjLUE5Ywh4LcJZCjUtHwKxZ0=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEr7pI7+qw3uyso88tkHOcY44shjJVyBxyGVHuDaDi1snWaUNYFW1Mw6qL6DCC197hl1o16I3jGW5Tn5sI38Di0=
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObccJMZykEKQN0ju0OECDNla2291TGoFMM9toqCbCry/ymSnjIDSPLVXHJlRrjNZjahAORFCX4F3VNyDi3+Wps=
tasks:
- install: null
- ceph:
    log-ignorelist:
    - \(POOL_APP_NOT_ENABLED\)
    - \(OSDMAP_FLAGS\)
    - \(OSD_
    - \(OBJECT_
    - \(PG_
    - \(SLOW_OPS\)
    - overall HEALTH
    - slow request
- workunit:
    clients:
      client.0:
      - rados/ec-esb-fio.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: uv2
teuthology_repo: https://github.com/kshtsk/teuthology
teuthology_sha1: a59626679648f962bca99d20d35578f2998c8f37
timestamp: 2026-03-31_11:18:10
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.282426
2026-03-31T19:07:47.532 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa; will attempt to use it
2026-03-31T19:07:47.532 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks
2026-03-31T19:07:47.532 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-31T19:07:47.532 INFO:teuthology.task.internal:Checking packages...
2026-03-31T19:07:47.532 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash '5bb3278730741031382ca9c3dc9d221a942e06a2'
2026-03-31T19:07:47.532 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-31T19:07:47.532 INFO:teuthology.packaging:ref: None
2026-03-31T19:07:47.532 INFO:teuthology.packaging:tag: None
2026-03-31T19:07:47.532 INFO:teuthology.packaging:branch: tentacle
2026-03-31T19:07:47.532 INFO:teuthology.packaging:sha1: 5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:07:47.532 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=tentacle
2026-03-31T19:07:48.338 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-714-g147f7c6a-1jammy
2026-03-31T19:07:48.339 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-31T19:07:48.339 INFO:teuthology.task.internal:no buildpackages task found
2026-03-31T19:07:48.339 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-31T19:07:48.340 INFO:teuthology.task.internal:Saving configuration
2026-03-31T19:07:48.345 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-31T19:07:48.346 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-31T19:07:48.352 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-31 19:05:39.193769', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLd69V00xYC2CMtkKaj3kAPlLI99FmnqsYl0RoH4t9jdwc9wliMTIlX+q+JRc9A8cvWVYXXkUC885ro/3uByaFw='}
2026-03-31T19:07:48.357 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-31 19:05:39.193976', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjJZoFckQT4dMqZz/UV7jOh0mm6AYkzTJa/zbNkN6aRKmLm7fj3mGn+TBrHKZKJjLUE5Ywh4LcJZCjUtHwKxZ0='}
2026-03-31T19:07:48.361 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-31 19:05:39.193094', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEr7pI7+qw3uyso88tkHOcY44shjJVyBxyGVHuDaDi1snWaUNYFW1Mw6qL6DCC197hl1o16I3jGW5Tn5sI38Di0='}
2026-03-31T19:07:48.365 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-31 19:05:39.193545', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObccJMZykEKQN0ju0OECDNla2291TGoFMM9toqCbCry/ymSnjIDSPLVXHJlRrjNZjahAORFCX4F3VNyDi3+Wps='}
2026-03-31T19:07:48.365 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-31T19:07:48.366 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['mon.a', 'mgr.x', 'client.0']
2026-03-31T19:07:48.366 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['osd.0', 'osd.1']
2026-03-31T19:07:48.366 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['osd.2', 'osd.3']
2026-03-31T19:07:48.366 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['osd.4', 'osd.5']
2026-03-31T19:07:48.366 INFO:teuthology.run_tasks:Running task console_log...
2026-03-31T19:07:48.396 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-31T19:07:48.401 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-31T19:07:48.405 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-31T19:07:48.410 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-31T19:07:48.410 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fc1d6959870>, signals=[15])
2026-03-31T19:07:48.410 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-31T19:07:48.411 INFO:teuthology.task.internal:Opening connections...
2026-03-31T19:07:48.411 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-31T19:07:48.411 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:07:48.473 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-31T19:07:48.473 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:07:48.534 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-31T19:07:48.534 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:07:48.594 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-31T19:07:48.595 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:07:48.659 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-31T19:07:48.660 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-31T19:07:48.663 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-31T19:07:48.663 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-31T19:07:48.705 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-31T19:07:48.705 INFO:teuthology.orchestra.run.vm01.stdout:NAME="Ubuntu"
2026-03-31T19:07:48.705 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="22.04"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_CODENAME=jammy
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:ID=ubuntu
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE=debian
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-31T19:07:48.706 INFO:teuthology.orchestra.run.vm01.stdout:UBUNTU_CODENAME=jammy
2026-03-31T19:07:48.706 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-31T19:07:48.711 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-31T19:07:48.714 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-31T19:07:48.714 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-31T19:07:48.760 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-31T19:07:48.760 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-31T19:07:48.765 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-31T19:07:48.768 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-31T19:07:48.768 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:NAME="Ubuntu"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="22.04"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_CODENAME=jammy
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:ID=ubuntu
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE=debian
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-31T19:07:48.814 INFO:teuthology.orchestra.run.vm05.stdout:UBUNTU_CODENAME=jammy
2026-03-31T19:07:48.814 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-31T19:07:48.818 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-31T19:07:48.822 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-31T19:07:48.822 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:NAME="Ubuntu"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="22.04"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_CODENAME=jammy
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:ID=ubuntu
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE=debian
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-31T19:07:48.868 INFO:teuthology.orchestra.run.vm06.stdout:UBUNTU_CODENAME=jammy
2026-03-31T19:07:48.868 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-31T19:07:48.872 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-31T19:07:48.874 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-31T19:07:48.875 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-31T19:07:48.875 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-31T19:07:48.876 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-31T19:07:48.877 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-31T19:07:48.878 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-31T19:07:48.911 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-31T19:07:48.912 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-31T19:07:48.912 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-31T19:07:48.920 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-31T19:07:48.922 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-31T19:07:48.922 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-31T19:07:48.924 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-31T19:07:48.924 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-31T19:07:48.925 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-31T19:07:48.956 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-31T19:07:48.957 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-31T19:07:48.964 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-31T19:07:48.966 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:49.203 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-31T19:07:49.205 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:49.442 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-31T19:07:49.445 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:49.674 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-31T19:07:49.677 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:49.903 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-31T19:07:49.905 INFO:teuthology.task.internal:Creating test directory...
2026-03-31T19:07:49.905 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-31T19:07:49.906 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-31T19:07:49.907 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-31T19:07:49.908 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-31T19:07:49.910 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-31T19:07:49.912 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-31T19:07:49.913 INFO:teuthology.task.internal:Creating archive directory...
2026-03-31T19:07:49.913 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-31T19:07:49.952 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-31T19:07:49.953 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-31T19:07:49.954 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-31T19:07:49.959 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-31T19:07:49.960 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-31T19:07:49.960 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-31T19:07:49.997 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:49.997 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-31T19:07:50.000 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:50.000 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-31T19:07:50.002 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:50.002 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-31T19:07:50.004 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:07:50.005 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-31T19:07:50.040 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-31T19:07:50.042 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-31T19:07:50.044 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-31T19:07:50.047 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.049 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.050 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.051 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.054 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.055 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.055 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.058 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-31T19:07:50.059 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-31T19:07:50.061 INFO:teuthology.task.internal:Configuring sudo...
2026-03-31T19:07:50.061 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-31T19:07:50.096 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-31T19:07:50.098 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-31T19:07:50.100 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-31T19:07:50.112 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-31T19:07:50.115 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-31T19:07:50.115 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-31T19:07:50.148 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-31T19:07:50.149 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-31T19:07:50.150 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-31T19:07:50.156 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-31T19:07:50.194 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-31T19:07:50.238 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:07:50.238 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-31T19:07:50.286 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-31T19:07:50.290 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-31T19:07:50.340 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:07:50.340 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-31T19:07:50.388 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-31T19:07:50.392 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-31T19:07:50.438 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:07:50.438 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-31T19:07:50.486 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-31T19:07:50.490 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-31T19:07:50.535 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:07:50.535 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-31T19:07:50.584 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-31T19:07:50.585 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-31T19:07:50.586 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-31T19:07:50.587 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-31T19:07:50.639 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-31T19:07:50.641 INFO:teuthology.task.internal:Starting timer...
2026-03-31T19:07:50.641 INFO:teuthology.run_tasks:Running task pcp...
2026-03-31T19:07:50.644 INFO:teuthology.run_tasks:Running task selinux...
2026-03-31T19:07:50.646 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-31T19:07:50.646 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-31T19:07:50.646 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-31T19:07:50.646 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-31T19:07:50.646 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-31T19:07:50.646 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-31T19:07:50.646 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-31T19:07:50.646 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-31T19:07:50.648 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-31T19:07:50.648 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-31T19:07:50.649 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-31T19:07:51.271 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-31T19:07:51.276 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-31T19:07:51.276 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventoryubw57vaa --limit vm01.local,vm03.local,vm05.local,vm06.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-31T19:10:12.427 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm05.local'), Remote(name='ubuntu@vm06.local')]
2026-03-31T19:10:12.428 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-31T19:10:12.428 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:10:12.489 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-31T19:10:12.704 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-31T19:10:12.705 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-31T19:10:12.705 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:10:12.764 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-31T19:10:12.984 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-31T19:10:12.985 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-31T19:10:12.985 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:10:13.044 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-31T19:10:13.264 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-31T19:10:13.265 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-31T19:10:13.265 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-31T19:10:13.326 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-31T19:10:13.540 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-31T19:10:13.540 INFO:teuthology.run_tasks:Running task clock...
2026-03-31T19:10:13.543 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-31T19:10:13.543 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-31T19:10:13.543 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-31T19:10:13.544 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-31T19:10:13.544 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-31T19:10:13.545 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-31T19:10:13.545 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-31T19:10:13.546 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-31T19:10:13.546 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Command line: ntpd -gq
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: ----------------------------------------------------
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: ntp-4 is maintained by Network Time Foundation,
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-31T19:10:13.561 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: corporation. Support and training for ntp-4 are
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Command line: ntpd -gq
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: ----------------------------------------------------
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: ntp-4 is maintained by Network Time Foundation,
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: corporation. Support and training for ntp-4 are
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: available at https://www.nwtime.org/support
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: ----------------------------------------------------
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: proto: precision = 0.029 usec (-25)
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: basedate set to 2022-02-04
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: gps base set to 2022-02-06 (week 2196)
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm03.stderr:31 Mar 19:10:13 ntpd[16221]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 94 days ago
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: available at https://www.nwtime.org/support
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: ----------------------------------------------------
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: proto: precision = 0.030 usec (-25)
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: basedate set to 2022-02-04
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: gps base set to 2022-02-06 (week 2196)
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-31T19:10:13.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stderr:31 Mar 19:10:13 ntpd[16233]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 94 days ago
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen and drop on 0 v6wildcard [::]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen normally on 2 lo 127.0.0.1:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen normally on 4 lo [::1]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:13 ntpd[16221]: Listening on routing socket on fd #22 for interface updates
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Command line: ntpd -gq
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: ----------------------------------------------------
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: ntp-4 is maintained by Network Time Foundation,
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: corporation. Support and training for ntp-4 are
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: available at https://www.nwtime.org/support
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: ----------------------------------------------------
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: proto: precision = 0.030 usec (-25)
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: basedate set to 2022-02-04
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: gps base set to 2022-02-06 (week 2196)
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm05.stderr:31 Mar 19:10:13 ntpd[16246]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 94 days ago
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen and drop on 0 v6wildcard [::]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen normally on 2 lo 127.0.0.1:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen normally on 3 ens3 192.168.123.101:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen normally on 4 lo [::1]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:1%2]:123
2026-03-31T19:10:13.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:13 ntpd[16233]: Listening on routing socket on fd #22 for interface updates
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen and drop on 0 v6wildcard [::]:123
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen normally on 2 lo 127.0.0.1:123
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen normally on 3 ens3 192.168.123.105:123
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen normally on 4 lo [::1]:123
2026-03-31T19:10:13.564 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:5%2]:123
2026-03-31T19:10:13.565 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:13 ntpd[16246]: Listening on routing socket on fd #22 for interface updates
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Command line: ntpd -gq
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: ----------------------------------------------------
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: ntp-4 is maintained by Network Time Foundation,
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: corporation. Support and training for ntp-4 are
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: available at https://www.nwtime.org/support
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: ----------------------------------------------------
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: proto: precision = 0.029 usec (-25)
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: basedate set to 2022-02-04
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: gps base set to 2022-02-06 (week 2196)
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-31T19:10:13.596 INFO:teuthology.orchestra.run.vm06.stderr:31 Mar 19:10:13 ntpd[16244]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 94 days ago
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen and drop on 0 v6wildcard [::]:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen normally on 2 lo 127.0.0.1:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen normally on 3 ens3 192.168.123.106:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen normally on 4 lo [::1]:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:6%2]:123
2026-03-31T19:10:13.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:13 ntpd[16244]: Listening on routing socket on fd #22 for interface updates
2026-03-31T19:10:14.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:14 ntpd[16221]: Soliciting pool server 136.243.177.133
2026-03-31T19:10:14.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:14 ntpd[16233]: Soliciting pool server 136.243.177.133
2026-03-31T19:10:14.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:14 ntpd[16246]: Soliciting pool server 88.99.86.9
2026-03-31T19:10:14.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:14 ntpd[16244]: Soliciting pool server 93.241.86.156
2026-03-31T19:10:15.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:15 ntpd[16221]: Soliciting pool server 173.249.58.145
2026-03-31T19:10:15.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:15 ntpd[16233]: Soliciting pool server 173.249.58.145
2026-03-31T19:10:15.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:15 ntpd[16221]: Soliciting pool server 178.63.67.56
2026-03-31T19:10:15.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:15 ntpd[16233]: Soliciting pool server 178.63.67.56
2026-03-31T19:10:15.562 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:15 ntpd[16246]: Soliciting pool server 136.243.177.133
2026-03-31T19:10:15.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:15 ntpd[16246]: Soliciting pool server 159.69.64.189
2026-03-31T19:10:15.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:15 ntpd[16244]: Soliciting pool server 88.99.86.9
2026-03-31T19:10:15.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:15 ntpd[16244]: Soliciting pool server 217.160.19.219
2026-03-31T19:10:16.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:16 ntpd[16221]: Soliciting pool server 144.76.66.156
2026-03-31T19:10:16.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:16 ntpd[16221]: Soliciting pool server 93.241.86.156
2026-03-31T19:10:16.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:16 ntpd[16233]: Soliciting pool server 144.76.66.156
2026-03-31T19:10:16.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:16 ntpd[16233]: Soliciting pool server 93.241.86.156
2026-03-31T19:10:16.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:16 ntpd[16221]: Soliciting pool server 212.132.108.186
2026-03-31T19:10:16.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:16 ntpd[16233]: Soliciting pool server 212.132.108.186
2026-03-31T19:10:16.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:16 ntpd[16246]: Soliciting pool server 178.63.67.56
2026-03-31T19:10:16.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:16 ntpd[16246]: Soliciting pool server 173.249.58.145
2026-03-31T19:10:16.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:16 ntpd[16246]: Soliciting pool server 141.144.246.224
2026-03-31T19:10:16.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:16 ntpd[16244]: Soliciting pool server 159.69.64.189
2026-03-31T19:10:16.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:16 ntpd[16244]: Soliciting pool server 136.243.177.133
2026-03-31T19:10:16.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:16 ntpd[16244]: Soliciting pool server 176.9.157.155
2026-03-31T19:10:17.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:17 ntpd[16221]: Soliciting pool server 148.251.5.46
2026-03-31T19:10:17.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:17 ntpd[16221]: Soliciting pool server 217.160.19.219
2026-03-31T19:10:17.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:17 ntpd[16221]: Soliciting pool server 88.99.86.9
2026-03-31T19:10:17.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:17 ntpd[16233]: Soliciting pool server 148.251.5.46
2026-03-31T19:10:17.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:17 ntpd[16233]: Soliciting pool server 217.160.19.219
2026-03-31T19:10:17.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:17 ntpd[16233]: Soliciting pool server 88.99.86.9
2026-03-31T19:10:17.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:17 ntpd[16221]: Soliciting pool server 176.9.44.212
2026-03-31T19:10:17.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:17 ntpd[16233]: Soliciting pool server 176.9.44.212
2026-03-31T19:10:17.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:17 ntpd[16246]: Soliciting pool server 212.132.108.186
2026-03-31T19:10:17.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:17 ntpd[16246]: Soliciting pool server 144.76.66.156
2026-03-31T19:10:17.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:17 ntpd[16246]: Soliciting pool server 176.9.44.212
2026-03-31T19:10:17.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:17 ntpd[16244]: Soliciting pool server 141.144.246.224
2026-03-31T19:10:17.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:17 ntpd[16244]: Soliciting pool server 178.63.67.56
2026-03-31T19:10:17.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:17 ntpd[16244]: Soliciting pool server 173.249.58.145
2026-03-31T19:10:17.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:17 ntpd[16244]: Soliciting pool server 178.63.9.212
2026-03-31T19:10:18.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:18 ntpd[16221]: Soliciting pool server 202.61.195.221
2026-03-31T19:10:18.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:18 ntpd[16221]: Soliciting pool server 176.9.157.155
2026-03-31T19:10:18.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:18 ntpd[16221]: Soliciting pool server 159.69.64.189
2026-03-31T19:10:18.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:18 ntpd[16233]: Soliciting pool server 202.61.195.221
2026-03-31T19:10:18.562 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:18 ntpd[16221]: Soliciting pool server 185.125.190.56
2026-03-31T19:10:18.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:18 ntpd[16233]: Soliciting pool server 176.9.157.155
2026-03-31T19:10:18.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:18 ntpd[16233]: Soliciting pool server 159.69.64.189
2026-03-31T19:10:18.563 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:18 ntpd[16233]: Soliciting pool server 185.125.190.56
2026-03-31T19:10:18.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:18 ntpd[16246]: Soliciting pool server 202.61.195.221
2026-03-31T19:10:18.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:18 ntpd[16246]: Soliciting pool server 148.251.5.46
2026-03-31T19:10:18.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:18 ntpd[16246]: Soliciting pool server 217.160.19.219
2026-03-31T19:10:18.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:18 ntpd[16246]: Soliciting pool server 91.189.91.157
2026-03-31T19:10:18.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:18 ntpd[16244]: Soliciting pool server 176.9.44.212
2026-03-31T19:10:18.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:18 ntpd[16244]: Soliciting pool server 212.132.108.186
2026-03-31T19:10:18.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:18 ntpd[16244]: Soliciting pool server 144.76.66.156
2026-03-31T19:10:18.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:18 ntpd[16244]: Soliciting pool server 185.125.190.57
2026-03-31T19:10:19.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:19 ntpd[16221]: Soliciting pool server 185.125.190.58
2026-03-31T19:10:19.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:19 ntpd[16221]: Soliciting pool server 217.217.243.78
2026-03-31T19:10:19.561 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:19 ntpd[16221]: Soliciting pool server 141.144.246.224
2026-03-31T19:10:19.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:19 ntpd[16233]: Soliciting pool server 185.125.190.58
2026-03-31T19:10:19.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:19 ntpd[16233]: Soliciting pool server 217.217.243.78
2026-03-31T19:10:19.562 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:19 ntpd[16233]: Soliciting pool server 141.144.246.224
2026-03-31T19:10:19.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:19 ntpd[16246]: Soliciting pool server 185.125.190.56
2026-03-31T19:10:19.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:19 ntpd[16246]: Soliciting pool server 217.217.243.78
2026-03-31T19:10:19.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:19 ntpd[16246]: Soliciting pool server 176.9.157.155
2026-03-31T19:10:19.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:19 ntpd[16244]: Soliciting pool server 91.189.91.157
2026-03-31T19:10:19.596 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:19 ntpd[16244]: Soliciting pool server 202.61.195.221
2026-03-31T19:10:19.597 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:19 ntpd[16244]: Soliciting pool server 148.251.5.46
2026-03-31T19:10:20.562 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:20 ntpd[16246]: Soliciting pool server 185.125.190.58
2026-03-31T19:10:20.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:20 ntpd[16246]: Soliciting pool server 178.63.9.212
2026-03-31T19:10:20.563 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:20 ntpd[16246]: Soliciting pool server 2a13:9a40:f825:3::f401:bc6
2026-03-31T19:10:21.636 INFO:teuthology.orchestra.run.vm06.stdout:31 Mar 19:10:21 ntpd[16244]: ntpd: time slew -0.007394 s
2026-03-31T19:10:21.636 INFO:teuthology.orchestra.run.vm06.stdout:ntpd: time slew -0.007394s
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:21.657 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.584 INFO:teuthology.orchestra.run.vm05.stdout:31 Mar 19:10:23 ntpd[16246]: ntpd: time slew +0.000463 s
2026-03-31T19:10:23.584 INFO:teuthology.orchestra.run.vm05.stdout:ntpd: time slew +0.000463s
2026-03-31T19:10:23.586 INFO:teuthology.orchestra.run.vm03.stdout:31 Mar 19:10:23 ntpd[16221]: ntpd: time slew -0.000294 s
2026-03-31T19:10:23.586 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew -0.000294s
2026-03-31T19:10:23.587 INFO:teuthology.orchestra.run.vm01.stdout:31 Mar 19:10:23 ntpd[16233]: ntpd: time slew -0.009431 s
2026-03-31T19:10:23.587 INFO:teuthology.orchestra.run.vm01.stdout:ntpd: time slew -0.009431s
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout:==============================================================================
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.605 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.610 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout:==============================================================================
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-31T19:10:23.611 INFO:teuthology.run_tasks:Running task install...
2026-03-31T19:10:23.613 DEBUG:teuthology.task.install:project ceph
2026-03-31T19:10:23.613 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-31T19:10:23.613 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-31T19:10:23.613 INFO:teuthology.task.install:Using flavor: default
2026-03-31T19:10:23.616 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-31T19:10:23.616 INFO:teuthology.task.install:extra packages: []
2026-03-31T19:10:23.616 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-key list | grep Ceph
2026-03-31T19:10:23.616 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph
2026-03-31T19:10:23.616 DEBUG:teuthology.orchestra.run.vm05:> sudo apt-key list | grep Ceph
2026-03-31T19:10:23.616 DEBUG:teuthology.orchestra.run.vm06:> sudo apt-key list | grep Ceph
2026-03-31T19:10:23.656 INFO:teuthology.orchestra.run.vm06.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-31T19:10:23.678 INFO:teuthology.orchestra.run.vm06.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-31T19:10:23.678 INFO:teuthology.orchestra.run.vm06.stdout:uid [ unknown] Ceph.com (release key)
2026-03-31T19:10:23.679 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-31T19:10:23.679 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64
2026-03-31T19:10:23.679 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:10:23.693 INFO:teuthology.orchestra.run.vm05.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-31T19:10:23.704 INFO:teuthology.orchestra.run.vm01.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-31T19:10:23.706 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-31T19:10:23.714 INFO:teuthology.orchestra.run.vm05.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-31T19:10:23.714 INFO:teuthology.orchestra.run.vm05.stdout:uid [ unknown] Ceph.com (release key) 2026-03-31T19:10:23.714 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-31T19:10:23.714 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64 2026-03-31T19:10:23.714 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2 2026-03-31T19:10:23.726 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-31T19:10:23.726 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph.com (release key) 2026-03-31T19:10:23.727 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-31T19:10:23.727 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64 2026-03-31T19:10:23.727 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2 2026-03-31T19:10:23.790 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 
2026-03-31T19:10:23.790 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key) 2026-03-31T19:10:23.790 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-31T19:10:23.790 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64 2026-03-31T19:10:23.790 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2 2026-03-31T19:10:24.279 INFO:teuthology.task.install.deb:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default/ 2026-03-31T19:10:24.279 INFO:teuthology.task.install.deb:Package version is 20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:24.312 INFO:teuthology.task.install.deb:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default/ 2026-03-31T19:10:24.312 INFO:teuthology.task.install.deb:Package version is 20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:24.381 INFO:teuthology.task.install.deb:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default/ 2026-03-31T19:10:24.382 INFO:teuthology.task.install.deb:Package version is 20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:24.415 INFO:teuthology.task.install.deb:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default/ 2026-03-31T19:10:24.415 INFO:teuthology.task.install.deb:Package version is 20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:24.839 
DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-31T19:10:24.839 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-31T19:10:24.848 DEBUG:teuthology.orchestra.run.vm06:> sudo apt-get update 2026-03-31T19:10:24.855 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:10:24.855 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-31T19:10:24.864 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update 2026-03-31T19:10:24.873 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-31T19:10:24.873 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-31T19:10:24.881 DEBUG:teuthology.orchestra.run.vm05:> sudo apt-get update 2026-03-31T19:10:24.960 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-31T19:10:24.960 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-31T19:10:24.963 INFO:teuthology.orchestra.run.vm06.stdout:Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-31T19:10:24.963 INFO:teuthology.orchestra.run.vm06.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-31T19:10:24.970 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-31T19:10:24.970 INFO:teuthology.orchestra.run.vm06.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-31T19:10:24.976 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-31T19:10:24.979 INFO:teuthology.orchestra.run.vm06.stdout:Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-31T19:10:25.000 INFO:teuthology.orchestra.run.vm05.stdout:Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-31T19:10:25.051 INFO:teuthology.orchestra.run.vm05.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-31T19:10:25.083 INFO:teuthology.orchestra.run.vm05.stdout:Hit:3 
http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-31T19:10:25.114 INFO:teuthology.orchestra.run.vm05.stdout:Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-31T19:10:25.173 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-31T19:10:25.278 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-31T19:10:25.282 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-31T19:10:25.286 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-31T19:10:25.377 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-31T19:10:25.392 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-31T19:10:25.480 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-31T19:10:25.510 INFO:teuthology.orchestra.run.vm01.stdout:Ign:5 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy InRelease 2026-03-31T19:10:25.511 INFO:teuthology.orchestra.run.vm05.stdout:Ign:5 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy InRelease 2026-03-31T19:10:25.518 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy InRelease 2026-03-31T19:10:25.521 INFO:teuthology.orchestra.run.vm06.stdout:Ign:5 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy InRelease 2026-03-31T19:10:25.628 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release [7689 B] 2026-03-31T19:10:25.628 INFO:teuthology.orchestra.run.vm05.stdout:Get:6 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release [7689 B] 2026-03-31T19:10:25.629 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release [7689 B] 2026-03-31T19:10:25.640 INFO:teuthology.orchestra.run.vm06.stdout:Get:6 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release [7689 B] 2026-03-31T19:10:25.741 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-31T19:10:25.746 INFO:teuthology.orchestra.run.vm01.stdout:Ign:7 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-31T19:10:25.746 INFO:teuthology.orchestra.run.vm05.stdout:Ign:7 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-31T19:10:25.760 INFO:teuthology.orchestra.run.vm06.stdout:Ign:7 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-31T19:10:25.852 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.8 kB] 2026-03-31T19:10:25.863 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.8 kB] 2026-03-31T19:10:25.864 INFO:teuthology.orchestra.run.vm05.stdout:Get:8 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.8 kB] 2026-03-31T19:10:25.880 INFO:teuthology.orchestra.run.vm06.stdout:Get:8 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.8 kB] 2026-03-31T19:10:25.931 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 26.5 kB in 1s (33.1 kB/s) 2026-03-31T19:10:25.941 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 26.5 kB in 1s (29.4 kB/s) 2026-03-31T19:10:25.946 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 26.5 kB in 1s (28.6 kB/s) 2026-03-31T19:10:25.956 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 26.5 kB in 1s (27.8 kB/s) 2026-03-31T19:10:26.608 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-31T19:10:26.609 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-31T19:10:26.624 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 
2026-03-31T19:10:26.625 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-721-g5bb32787-1jammy cephadm=20.2.0-721-g5bb32787-1jammy ceph-mds=20.2.0-721-g5bb32787-1jammy ceph-mgr=20.2.0-721-g5bb32787-1jammy ceph-common=20.2.0-721-g5bb32787-1jammy ceph-fuse=20.2.0-721-g5bb32787-1jammy ceph-test=20.2.0-721-g5bb32787-1jammy ceph-volume=20.2.0-721-g5bb32787-1jammy radosgw=20.2.0-721-g5bb32787-1jammy python3-rados=20.2.0-721-g5bb32787-1jammy python3-rgw=20.2.0-721-g5bb32787-1jammy python3-cephfs=20.2.0-721-g5bb32787-1jammy python3-rbd=20.2.0-721-g5bb32787-1jammy libcephfs2=20.2.0-721-g5bb32787-1jammy libcephfs-dev=20.2.0-721-g5bb32787-1jammy librados2=20.2.0-721-g5bb32787-1jammy librbd1=20.2.0-721-g5bb32787-1jammy rbd-fuse=20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:26.625 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-721-g5bb32787-1jammy cephadm=20.2.0-721-g5bb32787-1jammy ceph-mds=20.2.0-721-g5bb32787-1jammy ceph-mgr=20.2.0-721-g5bb32787-1jammy ceph-common=20.2.0-721-g5bb32787-1jammy ceph-fuse=20.2.0-721-g5bb32787-1jammy ceph-test=20.2.0-721-g5bb32787-1jammy ceph-volume=20.2.0-721-g5bb32787-1jammy radosgw=20.2.0-721-g5bb32787-1jammy python3-rados=20.2.0-721-g5bb32787-1jammy python3-rgw=20.2.0-721-g5bb32787-1jammy python3-cephfs=20.2.0-721-g5bb32787-1jammy python3-rbd=20.2.0-721-g5bb32787-1jammy libcephfs2=20.2.0-721-g5bb32787-1jammy libcephfs-dev=20.2.0-721-g5bb32787-1jammy librados2=20.2.0-721-g5bb32787-1jammy librbd1=20.2.0-721-g5bb32787-1jammy rbd-fuse=20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:26.637 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 
2026-03-31T19:10:26.639 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-721-g5bb32787-1jammy cephadm=20.2.0-721-g5bb32787-1jammy ceph-mds=20.2.0-721-g5bb32787-1jammy ceph-mgr=20.2.0-721-g5bb32787-1jammy ceph-common=20.2.0-721-g5bb32787-1jammy ceph-fuse=20.2.0-721-g5bb32787-1jammy ceph-test=20.2.0-721-g5bb32787-1jammy ceph-volume=20.2.0-721-g5bb32787-1jammy radosgw=20.2.0-721-g5bb32787-1jammy python3-rados=20.2.0-721-g5bb32787-1jammy python3-rgw=20.2.0-721-g5bb32787-1jammy python3-cephfs=20.2.0-721-g5bb32787-1jammy python3-rbd=20.2.0-721-g5bb32787-1jammy libcephfs2=20.2.0-721-g5bb32787-1jammy libcephfs-dev=20.2.0-721-g5bb32787-1jammy librados2=20.2.0-721-g5bb32787-1jammy librbd1=20.2.0-721-g5bb32787-1jammy rbd-fuse=20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:26.650 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-721-g5bb32787-1jammy cephadm=20.2.0-721-g5bb32787-1jammy ceph-mds=20.2.0-721-g5bb32787-1jammy ceph-mgr=20.2.0-721-g5bb32787-1jammy ceph-common=20.2.0-721-g5bb32787-1jammy ceph-fuse=20.2.0-721-g5bb32787-1jammy ceph-test=20.2.0-721-g5bb32787-1jammy ceph-volume=20.2.0-721-g5bb32787-1jammy radosgw=20.2.0-721-g5bb32787-1jammy python3-rados=20.2.0-721-g5bb32787-1jammy python3-rgw=20.2.0-721-g5bb32787-1jammy python3-cephfs=20.2.0-721-g5bb32787-1jammy python3-rbd=20.2.0-721-g5bb32787-1jammy libcephfs2=20.2.0-721-g5bb32787-1jammy libcephfs-dev=20.2.0-721-g5bb32787-1jammy librados2=20.2.0-721-g5bb32787-1jammy librbd1=20.2.0-721-g5bb32787-1jammy rbd-fuse=20.2.0-721-g5bb32787-1jammy 2026-03-31T19:10:26.663 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-31T19:10:26.663 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 
2026-03-31T19:10:26.674 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-31T19:10:26.687 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-31T19:10:26.866 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-31T19:10:26.866 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-31T19:10:26.866 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree... 2026-03-31T19:10:26.866 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 2026-03-31T19:10:26.867 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information... 2026-03-31T19:10:26.867 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-31T19:10:26.879 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-31T19:10:26.879 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-31T19:10:27.018 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:10:27.018 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:10:27.018 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed: 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon libcephfs-proxy2 libdouble-conversion3 libfuse2 libjq1 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: liblttng-ust1 libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 librgw2 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph libthrift-0.16.0 nvme-cli python-asyncssh-doc 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-iniconfig python3-jaraco.classes python3-jaraco.collections 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-pluggy python3-portend 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-pytest python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 
2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-toml python3-wcwidth 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-zc.lockfile qttranslations5-l10n 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages: 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb liblua5.3-dev luarocks python-natsort-doc python-psutil-doc 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: subversion python-pygments-doc ttf-bitstream-vera python3-paste python3-dap 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-webob-doc gsmartcontrol smart-notifier 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: mailx | mailutils 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages: 2026-03-31T19:10:27.019 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout:The following additional packages will be installed: 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-daemon libcephfs-proxy2 libdouble-conversion3 libfuse2 libjq1 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: liblttng-ust1 libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-31T19:10:27.032 INFO:teuthology.orchestra.run.vm06.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 librgw2 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph libthrift-0.16.0 nvme-cli python-asyncssh-doc 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-iniconfig python3-jaraco.classes python3-jaraco.collections 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-pluggy python3-portend 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-pytest python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 
2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-toml python3-wcwidth 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-webob python3-websocket python3-zc.lockfile qttranslations5-l10n 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout:Suggested packages: 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python3-influxdb liblua5.3-dev luarocks python-natsort-doc python-psutil-doc 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: subversion python-pygments-doc ttf-bitstream-vera python3-paste python3-dap 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: python-sklearn-doc ipython3 python-webob-doc gsmartcontrol smart-notifier 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: mailx | mailutils 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout:Recommended packages: 2026-03-31T19:10:27.033 INFO:teuthology.orchestra.run.vm06.stdout: btrfs-tools 2026-03-31T19:10:27.038 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:10:27.038 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:10:27.038 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:10:27.038 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T19:10:27.038 INFO:teuthology.orchestra.run.vm05.stdout:The following additional packages will be installed: 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-daemon libcephfs-proxy2 libdouble-conversion3 libfuse2 libjq1 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: liblttng-ust1 libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 librgw2 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: libsqlite3-mod-ceph libthrift-0.16.0 nvme-cli python-asyncssh-doc 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-iniconfig python3-jaraco.classes python3-jaraco.collections 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-pluggy python3-portend 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-pytest python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 
2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora python3-threadpoolctl python3-toml python3-wcwidth 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-webob python3-websocket python3-zc.lockfile qttranslations5-l10n 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout:Suggested packages: 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python3-influxdb liblua5.3-dev luarocks python-natsort-doc python-psutil-doc 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: subversion python-pygments-doc ttf-bitstream-vera python3-paste python3-dap 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: python-sklearn-doc ipython3 python-webob-doc gsmartcontrol smart-notifier 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: mailx | mailutils 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout:Recommended packages: 2026-03-31T19:10:27.039 INFO:teuthology.orchestra.run.vm05.stdout: btrfs-tools 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon libcephfs-dev libcephfs-proxy2 libcephfs2 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 libnbd0 liboath0 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: 
libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 librgw2 libsqlite3-mod-ceph libthrift-0.16.0 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: nvme-cli python-asyncssh-doc python3-asyncssh python3-cachetools 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-pluggy python3-portend python3-prettytable python3-psutil python3-py 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-pygments python3-pytest python3-rados python3-rbd python3-repoze.lru 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-rgw python3-routes python3-rsa 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-wcwidth python3-webob 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded: 2026-03-31T19:10:27.060 INFO:teuthology.orchestra.run.vm03.stdout: librados2 
librbd1 2026-03-31T19:10:27.063 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:10:27.063 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout:The following additional packages will be installed: 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-daemon libcephfs-proxy2 libdouble-conversion3 libfuse2 libjq1 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: liblttng-ust1 libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-31T19:10:27.064 INFO:teuthology.orchestra.run.vm01.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 librgw2 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph libthrift-0.16.0 nvme-cli python-asyncssh-doc 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-iniconfig python3-jaraco.classes python3-jaraco.collections 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T19:10:27.065 
INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes python3-natsort python3-pluggy python3-portend 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytest python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-toml python3-wcwidth 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-zc.lockfile qttranslations5-l10n 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout:Suggested packages: 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python3-influxdb liblua5.3-dev luarocks python-natsort-doc python-psutil-doc 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: subversion python-pygments-doc ttf-bitstream-vera python3-paste python3-dap 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: python-sklearn-doc ipython3 python-webob-doc gsmartcontrol smart-notifier 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: mailx | mailutils 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout:Recommended packages: 2026-03-31T19:10:27.065 INFO:teuthology.orchestra.run.vm01.stdout: btrfs-tools 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout:The following NEW packages will be installed: 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard 
ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-daemon libcephfs-dev libcephfs-proxy2 libcephfs2 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 libnbd0 liboath0 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 librdkafka1 librgw2 libsqlite3-mod-ceph libthrift-0.16.0 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: nvme-cli python-asyncssh-doc python3-asyncssh python3-cachetools 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-pluggy python3-portend python3-prettytable python3-psutil python3-py 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-pygments python3-pytest python3-rados python3-rbd python3-repoze.lru 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-rgw python3-routes python3-rsa 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora 2026-03-31T19:10:27.073 
INFO:teuthology.orchestra.run.vm06.stdout: python3-threadpoolctl python3-toml python3-wcwidth python3-webob 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.073 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be upgraded: 2026-03-31T19:10:27.074 INFO:teuthology.orchestra.run.vm06.stdout: librados2 librbd1 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout:The following NEW packages will be installed: 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-daemon libcephfs-dev libcephfs-proxy2 libcephfs2 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 libnbd0 liboath0 2026-03-31T19:10:27.082 INFO:teuthology.orchestra.run.vm05.stdout: libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 librdkafka1 librgw2 libsqlite3-mod-ceph libthrift-0.16.0 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: nvme-cli python-asyncssh-doc python3-asyncssh python3-cachetools 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 
2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-pluggy python3-portend python3-prettytable python3-psutil python3-py 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-pygments python3-pytest python3-rados python3-rbd python3-repoze.lru 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-rgw python3-routes python3-rsa 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-toml python3-wcwidth python3-webob 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be upgraded: 2026-03-31T19:10:27.083 INFO:teuthology.orchestra.run.vm05.stdout: librados2 librbd1 2026-03-31T19:10:27.091 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 85 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:10:27.091 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 281 MB of archives. 2026-03-31T19:10:27.091 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1092 MB of additional disk space will be used. 
2026-03-31T19:10:27.091 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-31T19:10:27.103 INFO:teuthology.orchestra.run.vm06.stdout:2 upgraded, 85 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:10:27.104 INFO:teuthology.orchestra.run.vm06.stdout:Need to get 281 MB of archives. 2026-03-31T19:10:27.104 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 1092 MB of additional disk space will be used. 2026-03-31T19:10:27.104 INFO:teuthology.orchestra.run.vm06.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed: 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-daemon libcephfs-dev libcephfs-proxy2 libcephfs2 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 libnbd0 liboath0 2026-03-31T19:10:27.105 INFO:teuthology.orchestra.run.vm01.stdout: libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 librgw2 libsqlite3-mod-ceph libthrift-0.16.0 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli python-asyncssh-doc python3-asyncssh python3-cachetools 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse python3-ceph-common 
python3-cephfs python3-cheroot 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-pluggy python3-portend python3-prettytable python3-psutil python3-py 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-pygments python3-pytest python3-rados python3-rbd python3-repoze.lru 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-rgw python3-routes python3-rsa 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-toml python3-wcwidth python3-webob 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse 2026-03-31T19:10:27.106 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat xmlstarlet 2026-03-31T19:10:27.107 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be upgraded: 2026-03-31T19:10:27.107 INFO:teuthology.orchestra.run.vm01.stdout: librados2 librbd1 2026-03-31T19:10:27.129 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-31T19:10:27.130 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-31T19:10:27.135 INFO:teuthology.orchestra.run.vm01.stdout:2 upgraded, 85 newly 
installed, 0 to remove and 50 not upgraded. 2026-03-31T19:10:27.136 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 281 MB of archives. 2026-03-31T19:10:27.136 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1092 MB of additional disk space will be used. 2026-03-31T19:10:27.136 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-31T19:10:27.137 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-31T19:10:27.143 INFO:teuthology.orchestra.run.vm06.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-31T19:10:27.144 INFO:teuthology.orchestra.run.vm06.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-31T19:10:27.152 INFO:teuthology.orchestra.run.vm06.stdout:Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-31T19:10:27.162 INFO:teuthology.orchestra.run.vm05.stdout:2 upgraded, 85 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:10:27.162 INFO:teuthology.orchestra.run.vm05.stdout:Need to get 281 MB of archives. 2026-03-31T19:10:27.162 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 1092 MB of additional disk space will be used. 
2026-03-31T19:10:27.163 INFO:teuthology.orchestra.run.vm05.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-31T19:10:27.178 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-31T19:10:27.179 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-31T19:10:27.192 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-31T19:10:27.194 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-31T19:10:27.194 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-31T19:10:27.194 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-31T19:10:27.194 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 http://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-31T19:10:27.197 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-31T19:10:27.197 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-31T19:10:27.197 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.2 [72.1 kB] 2026-03-31T19:10:27.198 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 http://archive.ubuntu.com/ubuntu 
jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-31T19:10:27.198 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-31T19:10:27.200 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-31T19:10:27.205 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-31T19:10:27.205 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-31T19:10:27.206 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-31T19:10:27.206 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-31T19:10:27.206 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-31T19:10:27.211 INFO:teuthology.orchestra.run.vm06.stdout:Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-31T19:10:27.213 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-31T19:10:27.215 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-31T19:10:27.216 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-31T19:10:27.217 INFO:teuthology.orchestra.run.vm03.stdout:Get:24 
http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-31T19:10:27.217 INFO:teuthology.orchestra.run.vm06.stdout:Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-31T19:10:27.217 INFO:teuthology.orchestra.run.vm01.stdout:Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-31T19:10:27.245 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-31T19:10:27.245 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-31T19:10:27.246 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-31T19:10:27.258 INFO:teuthology.orchestra.run.vm06.stdout:Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-31T19:10:27.259 INFO:teuthology.orchestra.run.vm06.stdout:Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-31T19:10:27.260 INFO:teuthology.orchestra.run.vm06.stdout:Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-31T19:10:27.260 INFO:teuthology.orchestra.run.vm06.stdout:Get:10 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-31T19:10:27.260 INFO:teuthology.orchestra.run.vm06.stdout:Get:11 http://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-31T19:10:27.287 INFO:teuthology.orchestra.run.vm01.stdout:Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 
2026-03-31T19:10:27.290 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-31T19:10:27.291 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-31T19:10:27.291 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-31T19:10:27.292 INFO:teuthology.orchestra.run.vm06.stdout:Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-31T19:10:27.292 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-31T19:10:27.292 INFO:teuthology.orchestra.run.vm06.stdout:Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-31T19:10:27.292 INFO:teuthology.orchestra.run.vm06.stdout:Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.2 [72.1 kB] 2026-03-31T19:10:27.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:15 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-31T19:10:27.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:16 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-31T19:10:27.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:17 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-31T19:10:27.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:18 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-31T19:10:27.293 INFO:teuthology.orchestra.run.vm06.stdout:Get:19 http://archive.ubuntu.com/ubuntu 
jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-31T19:10:27.294 INFO:teuthology.orchestra.run.vm06.stdout:Get:20 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-31T19:10:27.295 INFO:teuthology.orchestra.run.vm01.stdout:Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-31T19:10:27.296 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-31T19:10:27.297 INFO:teuthology.orchestra.run.vm01.stdout:Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-31T19:10:27.297 INFO:teuthology.orchestra.run.vm01.stdout:Get:10 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-31T19:10:27.302 INFO:teuthology.orchestra.run.vm06.stdout:Get:21 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-31T19:10:27.306 INFO:teuthology.orchestra.run.vm06.stdout:Get:22 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-31T19:10:27.306 INFO:teuthology.orchestra.run.vm01.stdout:Get:11 http://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-31T19:10:27.307 INFO:teuthology.orchestra.run.vm06.stdout:Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-31T19:10:27.310 INFO:teuthology.orchestra.run.vm06.stdout:Get:24 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-31T19:10:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-31T19:10:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 http://archive.ubuntu.com/ubuntu 
jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-31T19:10:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-31T19:10:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-31T19:10:27.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-31T19:10:27.316 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-31T19:10:27.317 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 http://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-31T19:10:27.318 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-31T19:10:27.320 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-31T19:10:27.322 INFO:teuthology.orchestra.run.vm01.stdout:Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-31T19:10:27.322 INFO:teuthology.orchestra.run.vm01.stdout:Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-31T19:10:27.323 INFO:teuthology.orchestra.run.vm01.stdout:Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.2 [72.1 kB] 2026-03-31T19:10:27.323 INFO:teuthology.orchestra.run.vm01.stdout:Get:15 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-31T19:10:27.323 
INFO:teuthology.orchestra.run.vm01.stdout:Get:16 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-31T19:10:27.323 INFO:teuthology.orchestra.run.vm01.stdout:Get:17 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-31T19:10:27.323 INFO:teuthology.orchestra.run.vm01.stdout:Get:18 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-31T19:10:27.324 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-31T19:10:27.324 INFO:teuthology.orchestra.run.vm01.stdout:Get:19 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-31T19:10:27.324 INFO:teuthology.orchestra.run.vm01.stdout:Get:20 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-31T19:10:27.324 INFO:teuthology.orchestra.run.vm06.stdout:Get:25 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-31T19:10:27.324 INFO:teuthology.orchestra.run.vm06.stdout:Get:26 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-31T19:10:27.325 INFO:teuthology.orchestra.run.vm06.stdout:Get:27 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-31T19:10:27.329 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 http://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-31T19:10:27.331 INFO:teuthology.orchestra.run.vm01.stdout:Get:21 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-31T19:10:27.334 INFO:teuthology.orchestra.run.vm01.stdout:Get:22 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-31T19:10:27.335 
INFO:teuthology.orchestra.run.vm01.stdout:Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-31T19:10:27.336 INFO:teuthology.orchestra.run.vm01.stdout:Get:24 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-31T19:10:27.353 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-31T19:10:27.361 INFO:teuthology.orchestra.run.vm01.stdout:Get:25 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-31T19:10:27.361 INFO:teuthology.orchestra.run.vm01.stdout:Get:26 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-31T19:10:27.362 INFO:teuthology.orchestra.run.vm01.stdout:Get:27 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-31T19:10:27.376 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-31T19:10:27.394 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-31T19:10:27.395 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-31T19:10:27.395 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-31T19:10:27.396 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-31T19:10:27.396 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments 
all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-31T19:10:27.399 INFO:teuthology.orchestra.run.vm06.stdout:Get:28 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-31T19:10:27.400 INFO:teuthology.orchestra.run.vm06.stdout:Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-31T19:10:27.400 INFO:teuthology.orchestra.run.vm06.stdout:Get:30 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-31T19:10:27.401 INFO:teuthology.orchestra.run.vm01.stdout:Get:28 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-31T19:10:27.402 INFO:teuthology.orchestra.run.vm01.stdout:Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-31T19:10:27.402 INFO:teuthology.orchestra.run.vm01.stdout:Get:30 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-31T19:10:27.410 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-31T19:10:27.410 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-31T19:10:27.411 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-31T19:10:27.411 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-31T19:10:27.411 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 http://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-31T19:10:27.422 INFO:teuthology.orchestra.run.vm05.stdout:Get:2 
http://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-31T19:10:27.448 INFO:teuthology.orchestra.run.vm06.stdout:Get:31 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-31T19:10:27.448 INFO:teuthology.orchestra.run.vm06.stdout:Get:32 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-31T19:10:27.449 INFO:teuthology.orchestra.run.vm06.stdout:Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-31T19:10:27.450 INFO:teuthology.orchestra.run.vm06.stdout:Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-31T19:10:27.450 INFO:teuthology.orchestra.run.vm06.stdout:Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-31T19:10:27.450 INFO:teuthology.orchestra.run.vm01.stdout:Get:31 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-31T19:10:27.450 INFO:teuthology.orchestra.run.vm01.stdout:Get:32 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-31T19:10:27.450 INFO:teuthology.orchestra.run.vm06.stdout:Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-31T19:10:27.451 INFO:teuthology.orchestra.run.vm01.stdout:Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-31T19:10:27.451 INFO:teuthology.orchestra.run.vm01.stdout:Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-31T19:10:27.451 INFO:teuthology.orchestra.run.vm01.stdout:Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-31T19:10:27.451 
INFO:teuthology.orchestra.run.vm01.stdout:Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-31T19:10:27.452 INFO:teuthology.orchestra.run.vm06.stdout:Get:37 http://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-31T19:10:27.453 INFO:teuthology.orchestra.run.vm01.stdout:Get:37 http://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-31T19:10:27.454 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-31T19:10:27.455 INFO:teuthology.orchestra.run.vm01.stdout:Get:38 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-31T19:10:27.456 INFO:teuthology.orchestra.run.vm01.stdout:Get:39 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-31T19:10:27.456 INFO:teuthology.orchestra.run.vm06.stdout:Get:38 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-31T19:10:27.458 INFO:teuthology.orchestra.run.vm01.stdout:Get:40 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-31T19:10:27.460 INFO:teuthology.orchestra.run.vm06.stdout:Get:39 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-31T19:10:27.461 INFO:teuthology.orchestra.run.vm06.stdout:Get:40 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-31T19:10:27.464 INFO:teuthology.orchestra.run.vm01.stdout:Get:41 http://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-31T19:10:27.466 INFO:teuthology.orchestra.run.vm06.stdout:Get:41 http://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-31T19:10:27.468 
INFO:teuthology.orchestra.run.vm01.stdout:Get:42 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-31T19:10:27.471 INFO:teuthology.orchestra.run.vm05.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-31T19:10:27.473 INFO:teuthology.orchestra.run.vm06.stdout:Get:42 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-31T19:10:27.475 INFO:teuthology.orchestra.run.vm01.stdout:Get:43 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-31T19:10:27.480 INFO:teuthology.orchestra.run.vm01.stdout:Get:44 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-31T19:10:27.480 INFO:teuthology.orchestra.run.vm01.stdout:Get:45 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-31T19:10:27.480 INFO:teuthology.orchestra.run.vm01.stdout:Get:46 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-31T19:10:27.482 INFO:teuthology.orchestra.run.vm01.stdout:Get:47 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-31T19:10:27.483 INFO:teuthology.orchestra.run.vm01.stdout:Get:48 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-31T19:10:27.486 INFO:teuthology.orchestra.run.vm06.stdout:Get:43 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-31T19:10:27.492 INFO:teuthology.orchestra.run.vm01.stdout:Get:49 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-31T19:10:27.493 INFO:teuthology.orchestra.run.vm01.stdout:Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 
python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-31T19:10:27.495 INFO:teuthology.orchestra.run.vm06.stdout:Get:44 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-31T19:10:27.495 INFO:teuthology.orchestra.run.vm06.stdout:Get:45 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-31T19:10:27.495 INFO:teuthology.orchestra.run.vm06.stdout:Get:46 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-31T19:10:27.495 INFO:teuthology.orchestra.run.vm05.stdout:Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-31T19:10:27.496 INFO:teuthology.orchestra.run.vm01.stdout:Get:51 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-31T19:10:27.496 INFO:teuthology.orchestra.run.vm01.stdout:Get:52 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-31T19:10:27.498 INFO:teuthology.orchestra.run.vm01.stdout:Get:53 http://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-31T19:10:27.500 INFO:teuthology.orchestra.run.vm06.stdout:Get:47 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-31T19:10:27.502 INFO:teuthology.orchestra.run.vm06.stdout:Get:48 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-31T19:10:27.522 INFO:teuthology.orchestra.run.vm06.stdout:Get:49 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-31T19:10:27.523 INFO:teuthology.orchestra.run.vm06.stdout:Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-31T19:10:27.523 
INFO:teuthology.orchestra.run.vm01.stdout:Get:54 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-31T19:10:27.529 INFO:teuthology.orchestra.run.vm06.stdout:Get:51 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-31T19:10:27.531 INFO:teuthology.orchestra.run.vm06.stdout:Get:52 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-31T19:10:27.532 INFO:teuthology.orchestra.run.vm06.stdout:Get:53 http://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-31T19:10:27.584 INFO:teuthology.orchestra.run.vm06.stdout:Get:54 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-31T19:10:27.616 INFO:teuthology.orchestra.run.vm05.stdout:Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-31T19:10:27.621 INFO:teuthology.orchestra.run.vm05.stdout:Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-31T19:10:27.633 INFO:teuthology.orchestra.run.vm05.stdout:Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-31T19:10:27.637 INFO:teuthology.orchestra.run.vm05.stdout:Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-31T19:10:27.638 INFO:teuthology.orchestra.run.vm05.stdout:Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-31T19:10:27.638 INFO:teuthology.orchestra.run.vm05.stdout:Get:10 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-31T19:10:27.639 INFO:teuthology.orchestra.run.vm05.stdout:Get:11 http://archive.ubuntu.com/ubuntu jammy/universe 
amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-31T19:10:27.647 INFO:teuthology.orchestra.run.vm05.stdout:Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-31T19:10:27.647 INFO:teuthology.orchestra.run.vm05.stdout:Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-31T19:10:27.650 INFO:teuthology.orchestra.run.vm05.stdout:Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.2 [72.1 kB] 2026-03-31T19:10:27.664 INFO:teuthology.orchestra.run.vm06.stdout:Get:55 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 20.2.0-721-g5bb32787-1jammy [2867 kB] 2026-03-31T19:10:27.664 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 20.2.0-721-g5bb32787-1jammy [2867 kB] 2026-03-31T19:10:27.671 INFO:teuthology.orchestra.run.vm05.stdout:Get:15 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 20.2.0-721-g5bb32787-1jammy [2867 kB] 2026-03-31T19:10:27.679 INFO:teuthology.orchestra.run.vm05.stdout:Get:16 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-31T19:10:27.679 INFO:teuthology.orchestra.run.vm05.stdout:Get:17 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-31T19:10:27.680 INFO:teuthology.orchestra.run.vm05.stdout:Get:18 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-31T19:10:27.680 INFO:teuthology.orchestra.run.vm05.stdout:Get:19 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora 
all 4.1.2-1 [14.8 kB] 2026-03-31T19:10:27.680 INFO:teuthology.orchestra.run.vm05.stdout:Get:20 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-31T19:10:27.680 INFO:teuthology.orchestra.run.vm05.stdout:Get:21 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-31T19:10:27.681 INFO:teuthology.orchestra.run.vm05.stdout:Get:22 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-31T19:10:27.682 INFO:teuthology.orchestra.run.vm05.stdout:Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-31T19:10:27.682 INFO:teuthology.orchestra.run.vm05.stdout:Get:24 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-31T19:10:27.715 INFO:teuthology.orchestra.run.vm05.stdout:Get:25 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-31T19:10:27.716 INFO:teuthology.orchestra.run.vm05.stdout:Get:26 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-31T19:10:27.716 INFO:teuthology.orchestra.run.vm05.stdout:Get:27 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-31T19:10:27.717 INFO:teuthology.orchestra.run.vm05.stdout:Get:28 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-31T19:10:27.780 INFO:teuthology.orchestra.run.vm05.stdout:Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-31T19:10:27.781 INFO:teuthology.orchestra.run.vm05.stdout:Get:30 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-31T19:10:27.781 INFO:teuthology.orchestra.run.vm05.stdout:Get:31 http://archive.ubuntu.com/ubuntu jammy/universe amd64 
python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-31T19:10:27.811 INFO:teuthology.orchestra.run.vm05.stdout:Get:32 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-31T19:10:27.812 INFO:teuthology.orchestra.run.vm05.stdout:Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-31T19:10:27.812 INFO:teuthology.orchestra.run.vm05.stdout:Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-31T19:10:27.812 INFO:teuthology.orchestra.run.vm05.stdout:Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-31T19:10:27.812 INFO:teuthology.orchestra.run.vm05.stdout:Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-31T19:10:27.813 INFO:teuthology.orchestra.run.vm05.stdout:Get:37 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-31T19:10:27.815 INFO:teuthology.orchestra.run.vm05.stdout:Get:38 http://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-31T19:10:27.845 INFO:teuthology.orchestra.run.vm05.stdout:Get:39 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-31T19:10:27.847 INFO:teuthology.orchestra.run.vm05.stdout:Get:40 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-31T19:10:27.847 INFO:teuthology.orchestra.run.vm05.stdout:Get:41 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-31T19:10:27.850 INFO:teuthology.orchestra.run.vm05.stdout:Get:42 http://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-31T19:10:27.853 INFO:teuthology.orchestra.run.vm05.stdout:Get:43 http://archive.ubuntu.com/ubuntu 
jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-31T19:10:27.903 INFO:teuthology.orchestra.run.vm05.stdout:Get:44 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-31T19:10:27.909 INFO:teuthology.orchestra.run.vm05.stdout:Get:45 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-31T19:10:27.909 INFO:teuthology.orchestra.run.vm05.stdout:Get:46 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-31T19:10:27.909 INFO:teuthology.orchestra.run.vm05.stdout:Get:47 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-31T19:10:27.910 INFO:teuthology.orchestra.run.vm05.stdout:Get:48 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-31T19:10:27.911 INFO:teuthology.orchestra.run.vm05.stdout:Get:49 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-31T19:10:27.943 INFO:teuthology.orchestra.run.vm05.stdout:Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-31T19:10:27.943 INFO:teuthology.orchestra.run.vm05.stdout:Get:51 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-31T19:10:27.945 INFO:teuthology.orchestra.run.vm05.stdout:Get:52 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-31T19:10:27.946 INFO:teuthology.orchestra.run.vm05.stdout:Get:53 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-31T19:10:27.973 INFO:teuthology.orchestra.run.vm05.stdout:Get:54 http://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-31T19:10:28.037 
INFO:teuthology.orchestra.run.vm05.stdout:Get:55 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-31T19:10:28.172 INFO:teuthology.orchestra.run.vm01.stdout:Get:55 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 20.2.0-721-g5bb32787-1jammy [2867 kB] 2026-03-31T19:10:29.240 INFO:teuthology.orchestra.run.vm06.stdout:Get:56 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 20.2.0-721-g5bb32787-1jammy [3571 kB] 2026-03-31T19:10:29.247 INFO:teuthology.orchestra.run.vm05.stdout:Get:56 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 20.2.0-721-g5bb32787-1jammy [3571 kB] 2026-03-31T19:10:29.269 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 20.2.0-721-g5bb32787-1jammy [3571 kB] 2026-03-31T19:10:30.143 INFO:teuthology.orchestra.run.vm06.stdout:Get:57 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 20.2.0-721-g5bb32787-1jammy [831 kB] 2026-03-31T19:10:30.150 INFO:teuthology.orchestra.run.vm05.stdout:Get:57 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 20.2.0-721-g5bb32787-1jammy [831 kB] 2026-03-31T19:10:30.189 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 20.2.0-721-g5bb32787-1jammy [831 kB] 
2026-03-31T19:10:30.366 INFO:teuthology.orchestra.run.vm06.stdout:Get:58 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 20.2.0-721-g5bb32787-1jammy [364 kB] 2026-03-31T19:10:30.369 INFO:teuthology.orchestra.run.vm06.stdout:Get:59 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 20.2.0-721-g5bb32787-1jammy [32.9 kB] 2026-03-31T19:10:30.373 INFO:teuthology.orchestra.run.vm05.stdout:Get:58 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 20.2.0-721-g5bb32787-1jammy [364 kB] 2026-03-31T19:10:30.376 INFO:teuthology.orchestra.run.vm05.stdout:Get:59 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 20.2.0-721-g5bb32787-1jammy [32.9 kB] 2026-03-31T19:10:30.376 INFO:teuthology.orchestra.run.vm05.stdout:Get:60 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 20.2.0-721-g5bb32787-1jammy [184 kB] 2026-03-31T19:10:30.411 INFO:teuthology.orchestra.run.vm06.stdout:Get:60 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 20.2.0-721-g5bb32787-1jammy [184 kB] 2026-03-31T19:10:30.416 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 20.2.0-721-g5bb32787-1jammy [364 kB] 2026-03-31T19:10:30.419 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 20.2.0-721-g5bb32787-1jammy [32.9 kB] 2026-03-31T19:10:30.419 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 20.2.0-721-g5bb32787-1jammy [184 kB] 2026-03-31T19:10:30.479 INFO:teuthology.orchestra.run.vm06.stdout:Get:61 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 20.2.0-721-g5bb32787-1jammy [83.9 kB] 2026-03-31T19:10:30.479 INFO:teuthology.orchestra.run.vm06.stdout:Get:62 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 20.2.0-721-g5bb32787-1jammy [341 kB] 2026-03-31T19:10:30.486 INFO:teuthology.orchestra.run.vm05.stdout:Get:61 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 20.2.0-721-g5bb32787-1jammy [83.9 kB] 2026-03-31T19:10:30.487 INFO:teuthology.orchestra.run.vm05.stdout:Get:62 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 20.2.0-721-g5bb32787-1jammy [341 kB] 2026-03-31T19:10:30.530 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 20.2.0-721-g5bb32787-1jammy [83.9 kB] 2026-03-31T19:10:30.531 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 20.2.0-721-g5bb32787-1jammy [341 kB] 2026-03-31T19:10:30.590 INFO:teuthology.orchestra.run.vm06.stdout:Get:63 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 20.2.0-721-g5bb32787-1jammy [8696 kB] 2026-03-31T19:10:30.597 INFO:teuthology.orchestra.run.vm05.stdout:Get:63 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 20.2.0-721-g5bb32787-1jammy [8696 kB] 2026-03-31T19:10:30.644 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 20.2.0-721-g5bb32787-1jammy [8696 kB] 2026-03-31T19:10:31.827 INFO:teuthology.orchestra.run.vm06.stdout:Get:64 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 20.2.0-721-g5bb32787-1jammy [112 kB] 2026-03-31T19:10:31.828 INFO:teuthology.orchestra.run.vm06.stdout:Get:65 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 20.2.0-721-g5bb32787-1jammy [261 kB] 2026-03-31T19:10:31.830 INFO:teuthology.orchestra.run.vm06.stdout:Get:66 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 20.2.0-721-g5bb32787-1jammy [29.3 MB] 2026-03-31T19:10:31.835 INFO:teuthology.orchestra.run.vm05.stdout:Get:64 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 
python3-rgw amd64 20.2.0-721-g5bb32787-1jammy [112 kB] 2026-03-31T19:10:31.836 INFO:teuthology.orchestra.run.vm05.stdout:Get:65 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 20.2.0-721-g5bb32787-1jammy [261 kB] 2026-03-31T19:10:31.838 INFO:teuthology.orchestra.run.vm05.stdout:Get:66 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 20.2.0-721-g5bb32787-1jammy [29.3 MB] 2026-03-31T19:10:31.904 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 20.2.0-721-g5bb32787-1jammy [112 kB] 2026-03-31T19:10:31.905 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 20.2.0-721-g5bb32787-1jammy [261 kB] 2026-03-31T19:10:31.907 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 20.2.0-721-g5bb32787-1jammy [29.3 MB] 2026-03-31T19:10:33.911 INFO:teuthology.orchestra.run.vm06.stdout:Get:67 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 20.2.0-721-g5bb32787-1jammy [5416 kB] 2026-03-31T19:10:33.919 INFO:teuthology.orchestra.run.vm05.stdout:Get:67 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 20.2.0-721-g5bb32787-1jammy [5416 kB] 2026-03-31T19:10:33.984 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 20.2.0-721-g5bb32787-1jammy [5416 kB] 2026-03-31T19:10:34.244 INFO:teuthology.orchestra.run.vm06.stdout:Get:68 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 20.2.0-721-g5bb32787-1jammy [246 kB] 2026-03-31T19:10:34.247 INFO:teuthology.orchestra.run.vm05.stdout:Get:68 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 20.2.0-721-g5bb32787-1jammy [246 kB] 2026-03-31T19:10:34.248 INFO:teuthology.orchestra.run.vm06.stdout:Get:69 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 20.2.0-721-g5bb32787-1jammy [124 kB] 2026-03-31T19:10:34.250 INFO:teuthology.orchestra.run.vm05.stdout:Get:69 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 20.2.0-721-g5bb32787-1jammy [124 kB] 2026-03-31T19:10:34.252 INFO:teuthology.orchestra.run.vm05.stdout:Get:70 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 20.2.0-721-g5bb32787-1jammy [907 kB] 2026-03-31T19:10:34.267 INFO:teuthology.orchestra.run.vm05.stdout:Get:71 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 20.2.0-721-g5bb32787-1jammy [6393 kB] 2026-03-31T19:10:34.281 INFO:teuthology.orchestra.run.vm06.stdout:Get:70 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 20.2.0-721-g5bb32787-1jammy [907 kB] 2026-03-31T19:10:34.315 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 20.2.0-721-g5bb32787-1jammy [246 kB] 2026-03-31T19:10:34.317 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 20.2.0-721-g5bb32787-1jammy [124 kB] 2026-03-31T19:10:34.318 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 20.2.0-721-g5bb32787-1jammy [907 kB] 2026-03-31T19:10:34.343 INFO:teuthology.orchestra.run.vm06.stdout:Get:71 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 20.2.0-721-g5bb32787-1jammy [6393 kB] 2026-03-31T19:10:34.350 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 20.2.0-721-g5bb32787-1jammy [6393 kB] 2026-03-31T19:10:34.616 INFO:teuthology.orchestra.run.vm05.stdout:Get:72 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 20.2.0-721-g5bb32787-1jammy [21.7 MB] 2026-03-31T19:10:34.702 INFO:teuthology.orchestra.run.vm06.stdout:Get:72 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 
ceph-osd amd64 20.2.0-721-g5bb32787-1jammy [21.7 MB] 2026-03-31T19:10:34.771 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 20.2.0-721-g5bb32787-1jammy [21.7 MB] 2026-03-31T19:10:35.851 INFO:teuthology.orchestra.run.vm05.stdout:Get:73 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 20.2.0-721-g5bb32787-1jammy [14.1 kB] 2026-03-31T19:10:35.851 INFO:teuthology.orchestra.run.vm05.stdout:Get:74 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 20.2.0-721-g5bb32787-1jammy [955 kB] 2026-03-31T19:10:35.866 INFO:teuthology.orchestra.run.vm05.stdout:Get:75 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 20.2.0-721-g5bb32787-1jammy [2341 kB] 2026-03-31T19:10:35.961 INFO:teuthology.orchestra.run.vm01.stdout:Get:56 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 20.2.0-721-g5bb32787-1jammy [3571 kB] 2026-03-31T19:10:36.008 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 20.2.0-721-g5bb32787-1jammy [14.1 kB] 2026-03-31T19:10:36.023 INFO:teuthology.orchestra.run.vm06.stdout:Get:73 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 20.2.0-721-g5bb32787-1jammy [14.1 kB] 2026-03-31T19:10:36.023 INFO:teuthology.orchestra.run.vm06.stdout:Get:74 
https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 20.2.0-721-g5bb32787-1jammy [955 kB]
2026-03-31T19:10:36.025 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 20.2.0-721-g5bb32787-1jammy [955 kB]
2026-03-31T19:10:36.038 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 20.2.0-721-g5bb32787-1jammy [2341 kB]
2026-03-31T19:10:36.053 INFO:teuthology.orchestra.run.vm06.stdout:Get:75 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 20.2.0-721-g5bb32787-1jammy [2341 kB]
2026-03-31T19:10:36.054 INFO:teuthology.orchestra.run.vm05.stdout:Get:76 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 20.2.0-721-g5bb32787-1jammy [1049 kB]
2026-03-31T19:10:36.080 INFO:teuthology.orchestra.run.vm05.stdout:Get:77 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 20.2.0-721-g5bb32787-1jammy [179 kB]
2026-03-31T19:10:36.083 INFO:teuthology.orchestra.run.vm05.stdout:Get:78 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 20.2.0-721-g5bb32787-1jammy [45.5 MB]
2026-03-31T19:10:36.165 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 20.2.0-721-g5bb32787-1jammy [1049 kB]
2026-03-31T19:10:36.224 INFO:teuthology.orchestra.run.vm06.stdout:Get:76 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 20.2.0-721-g5bb32787-1jammy [1049 kB]
2026-03-31T19:10:36.260 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 20.2.0-721-g5bb32787-1jammy [179 kB]
2026-03-31T19:10:36.262 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 20.2.0-721-g5bb32787-1jammy [45.5 MB]
2026-03-31T19:10:36.264 INFO:teuthology.orchestra.run.vm06.stdout:Get:77 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 20.2.0-721-g5bb32787-1jammy [179 kB]
2026-03-31T19:10:36.270 INFO:teuthology.orchestra.run.vm06.stdout:Get:78 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 20.2.0-721-g5bb32787-1jammy [45.5 MB]
2026-03-31T19:10:37.971 INFO:teuthology.orchestra.run.vm01.stdout:Get:57 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 20.2.0-721-g5bb32787-1jammy [831 kB]
2026-03-31T19:10:38.298 INFO:teuthology.orchestra.run.vm01.stdout:Get:58 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 20.2.0-721-g5bb32787-1jammy [364 kB]
2026-03-31T19:10:38.412 INFO:teuthology.orchestra.run.vm01.stdout:Get:59 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 20.2.0-721-g5bb32787-1jammy [32.9 kB]
2026-03-31T19:10:38.414 INFO:teuthology.orchestra.run.vm01.stdout:Get:60 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 20.2.0-721-g5bb32787-1jammy [184 kB]
2026-03-31T19:10:38.422 INFO:teuthology.orchestra.run.vm01.stdout:Get:61 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 20.2.0-721-g5bb32787-1jammy [83.9 kB]
2026-03-31T19:10:38.520 INFO:teuthology.orchestra.run.vm01.stdout:Get:62 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 20.2.0-721-g5bb32787-1jammy [341 kB]
2026-03-31T19:10:38.603 INFO:teuthology.orchestra.run.vm05.stdout:Get:79 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 20.2.0-721-g5bb32787-1jammy [8625 kB]
2026-03-31T19:10:38.631 INFO:teuthology.orchestra.run.vm01.stdout:Get:63 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 20.2.0-721-g5bb32787-1jammy [8696 kB]
2026-03-31T19:10:38.887 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 20.2.0-721-g5bb32787-1jammy [8625 kB]
2026-03-31T19:10:38.977 INFO:teuthology.orchestra.run.vm06.stdout:Get:79 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 20.2.0-721-g5bb32787-1jammy [8625 kB]
2026-03-31T19:10:39.057 INFO:teuthology.orchestra.run.vm05.stdout:Get:80 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 20.2.0-721-g5bb32787-1jammy [14.2 kB]
2026-03-31T19:10:39.057 INFO:teuthology.orchestra.run.vm05.stdout:Get:81 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 20.2.0-721-g5bb32787-1jammy [99.5 MB]
2026-03-31T19:10:39.348 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 20.2.0-721-g5bb32787-1jammy [14.2 kB]
2026-03-31T19:10:39.348 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 20.2.0-721-g5bb32787-1jammy [99.5 MB]
2026-03-31T19:10:39.550 INFO:teuthology.orchestra.run.vm06.stdout:Get:80 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 20.2.0-721-g5bb32787-1jammy [14.2 kB]
2026-03-31T19:10:39.550 INFO:teuthology.orchestra.run.vm06.stdout:Get:81 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 20.2.0-721-g5bb32787-1jammy [99.5 MB]
2026-03-31T19:10:41.218 INFO:teuthology.orchestra.run.vm01.stdout:Get:64 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 20.2.0-721-g5bb32787-1jammy [112 kB]
2026-03-31T19:10:41.304 INFO:teuthology.orchestra.run.vm01.stdout:Get:65 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 20.2.0-721-g5bb32787-1jammy [261 kB]
2026-03-31T19:10:41.330 INFO:teuthology.orchestra.run.vm01.stdout:Get:66 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 20.2.0-721-g5bb32787-1jammy [29.3 MB]
2026-03-31T19:10:44.345 INFO:teuthology.orchestra.run.vm05.stdout:Get:82 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 20.2.0-721-g5bb32787-1jammy [135 kB]
2026-03-31T19:10:44.346 INFO:teuthology.orchestra.run.vm05.stdout:Get:83 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-daemon amd64 20.2.0-721-g5bb32787-1jammy [43.2 kB]
2026-03-31T19:10:44.346 INFO:teuthology.orchestra.run.vm05.stdout:Get:84 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-proxy2 amd64 20.2.0-721-g5bb32787-1jammy [30.7 kB]
2026-03-31T19:10:44.346 INFO:teuthology.orchestra.run.vm05.stdout:Get:85 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 20.2.0-721-g5bb32787-1jammy [41.4 kB]
2026-03-31T19:10:44.346 INFO:teuthology.orchestra.run.vm05.stdout:Get:86 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 20.2.0-721-g5bb32787-1jammy [25.1 MB]
2026-03-31T19:10:44.589 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 20.2.0-721-g5bb32787-1jammy [135 kB]
2026-03-31T19:10:44.589 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-daemon amd64 20.2.0-721-g5bb32787-1jammy [43.2 kB]
2026-03-31T19:10:44.589 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-proxy2 amd64 20.2.0-721-g5bb32787-1jammy [30.7 kB]
2026-03-31T19:10:44.590 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 20.2.0-721-g5bb32787-1jammy [41.4 kB]
2026-03-31T19:10:44.590 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 20.2.0-721-g5bb32787-1jammy [25.1 MB]
2026-03-31T19:10:45.616 INFO:teuthology.orchestra.run.vm05.stdout:Get:87 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 20.2.0-721-g5bb32787-1jammy [97.5 kB]
2026-03-31T19:10:45.739 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 20.2.0-721-g5bb32787-1jammy [97.5 kB]
2026-03-31T19:10:45.888 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 281 MB in 19s (15.1 MB/s)
2026-03-31T19:10:45.904 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-31T19:10:45.940 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 119262 files and directories currently installed.)
2026-03-31T19:10:45.942 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../00-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-31T19:10:45.944 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:10:45.965 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-31T19:10:45.972 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../01-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-31T19:10:45.972 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-31T19:10:45.990 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-31T19:10:45.997 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../02-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-31T19:10:45.998 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:10:46.021 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-31T19:10:46.027 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../03-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.032 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 281 MB in 19s (15.0 MB/s)
2026-03-31T19:10:46.032 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.072 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-31T19:10:46.084 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-31T19:10:46.091 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../04-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.092 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.097 INFO:teuthology.orchestra.run.vm06.stdout:Get:82 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 20.2.0-721-g5bb32787-1jammy [135 kB]
2026-03-31T19:10:46.097 INFO:teuthology.orchestra.run.vm06.stdout:Get:83 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-daemon amd64 20.2.0-721-g5bb32787-1jammy [43.2 kB]
2026-03-31T19:10:46.097 INFO:teuthology.orchestra.run.vm06.stdout:Get:84 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-proxy2 amd64 20.2.0-721-g5bb32787-1jammy [30.7 kB]
2026-03-31T19:10:46.098 INFO:teuthology.orchestra.run.vm06.stdout:Get:85 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 20.2.0-721-g5bb32787-1jammy [41.4 kB]
2026-03-31T19:10:46.098 INFO:teuthology.orchestra.run.vm06.stdout:Get:86 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 20.2.0-721-g5bb32787-1jammy [25.1 MB]
2026-03-31T19:10:46.105 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 119262 files and directories currently installed.)
2026-03-31T19:10:46.108 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../00-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-31T19:10:46.110 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:10:46.112 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-31T19:10:46.118 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../05-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.119 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.134 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-31T19:10:46.136 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../01-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-31T19:10:46.137 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-31T19:10:46.142 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-31T19:10:46.148 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../06-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-31T19:10:46.149 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:10:46.152 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-31T19:10:46.159 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../02-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-31T19:10:46.160 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:10:46.175 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../07-librbd1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.177 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librbd1 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:10:46.179 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-31T19:10:46.186 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../03-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.190 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.235 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-31T19:10:46.240 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../04-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.241 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.244 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../08-librados2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.247 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librados2 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:10:46.259 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-31T19:10:46.264 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../05-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:10:46.265 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:46.299 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-31T19:10:46.305 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../06-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-31T19:10:46.306 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:10:46.308 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libnbd0.
2026-03-31T19:10:46.314 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../09-libnbd0_1.10.5-1_amd64.deb ...
2026-03-31T19:10:46.315 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-31T19:10:46.329 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../07-librbd1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.331 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:10:46.331 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs2.
2026-03-31T19:10:46.337 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../10-libcephfs2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.338 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.361 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rados.
2026-03-31T19:10:46.368 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../11-python3-rados_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.368 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rados (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.391 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-31T19:10:46.396 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../12-python3-ceph-argparse_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:46.397 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.399 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../08-librados2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.402 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:10:46.412 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cephfs.
2026-03-31T19:10:46.418 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../13-python3-cephfs_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.419 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cephfs (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.446 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-31T19:10:46.452 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../14-python3-ceph-common_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:46.452 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.461 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0.
2026-03-31T19:10:46.466 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../09-libnbd0_1.10.5-1_amd64.deb ...
2026-03-31T19:10:46.467 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-31T19:10:46.473 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-31T19:10:46.479 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../15-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-31T19:10:46.479 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:10:46.483 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2.
2026-03-31T19:10:46.488 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../10-libcephfs2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.489 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.497 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-prettytable.
2026-03-31T19:10:46.503 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../16-python3-prettytable_2.5.0-2_all.deb ...
2026-03-31T19:10:46.503 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-31T19:10:46.513 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados.
2026-03-31T19:10:46.518 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rbd.
2026-03-31T19:10:46.520 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../11-python3-rados_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.520 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.524 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../17-python3-rbd_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.524 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rbd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.541 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-31T19:10:46.545 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-31T19:10:46.547 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../12-python3-ceph-argparse_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:46.548 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.551 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../18-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-31T19:10:46.551 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:10:46.563 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs.
2026-03-31T19:10:46.569 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../13-python3-cephfs_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.570 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.573 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package librgw2.
2026-03-31T19:10:46.579 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../19-librgw2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.580 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.588 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-31T19:10:46.594 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../14-python3-ceph-common_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:46.595 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.617 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-31T19:10:46.624 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../15-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-31T19:10:46.624 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:10:46.642 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable.
2026-03-31T19:10:46.648 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../16-python3-prettytable_2.5.0-2_all.deb ...
2026-03-31T19:10:46.649 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-31T19:10:46.664 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd.
2026-03-31T19:10:46.670 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../17-python3-rbd_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.693 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.711 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rgw.
2026-03-31T19:10:46.713 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-31T19:10:46.717 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../20-python3-rgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.718 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.719 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../18-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-31T19:10:46.720 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:10:46.810 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-31T19:10:46.816 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../21-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-31T19:10:46.817 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2.
2026-03-31T19:10:46.817 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:10:46.824 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../19-librgw2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.825 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.836 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libradosstriper1.
2026-03-31T19:10:46.842 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../22-libradosstriper1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.843 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.866 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-common.
2026-03-31T19:10:46.873 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../23-ceph-common_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.874 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.968 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw.
2026-03-31T19:10:46.974 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../20-python3-rgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:46.975 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:46.993 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-31T19:10:46.999 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../21-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-31T19:10:47.000 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:10:47.018 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1.
2026-03-31T19:10:47.024 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../22-libradosstriper1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:47.025 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.049 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common.
2026-03-31T19:10:47.056 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../23-ceph-common_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:47.057 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.287 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-base.
2026-03-31T19:10:47.294 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../24-ceph-base_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:47.299 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-base (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.462 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-31T19:10:47.468 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../25-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-31T19:10:47.468 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:10:47.471 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base.
2026-03-31T19:10:47.477 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../24-ceph-base_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:47.482 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.484 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cheroot.
2026-03-31T19:10:47.490 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../26-python3-cheroot_8.5.2+ds1-1ubuntu3.2_all.deb ...
2026-03-31T19:10:47.491 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:10:47.510 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-31T19:10:47.515 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../27-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-31T19:10:47.522 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:10:47.537 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-31T19:10:47.544 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../28-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-31T19:10:47.552 INFO:teuthology.orchestra.run.vm06.stdout:Get:87 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 20.2.0-721-g5bb32787-1jammy [97.5 kB]
2026-03-31T19:10:47.554 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:10:47.570 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-31T19:10:47.573 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-31T19:10:47.576 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../29-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-31T19:10:47.577 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:10:47.579 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../25-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-31T19:10:47.580 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:10:47.593 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-tempora.
2026-03-31T19:10:47.597 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot.
2026-03-31T19:10:47.599 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../30-python3-tempora_4.1.2-1_all.deb ...
2026-03-31T19:10:47.599 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-31T19:10:47.603 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../26-python3-cheroot_8.5.2+ds1-1ubuntu3.2_all.deb ...
2026-03-31T19:10:47.604 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:10:47.615 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-portend.
2026-03-31T19:10:47.622 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../31-python3-portend_3.0.0-1_all.deb ...
2026-03-31T19:10:47.623 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-31T19:10:47.623 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-31T19:10:47.630 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../27-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-31T19:10:47.631 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:10:47.640 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-31T19:10:47.647 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../32-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-31T19:10:47.648 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-31T19:10:47.648 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-31T19:10:47.655 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../28-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-31T19:10:47.656 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:10:47.664 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-31T19:10:47.670 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../33-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-31T19:10:47.671 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:10:47.672 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-31T19:10:47.681 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../29-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-31T19:10:47.682 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:10:47.702 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora.
2026-03-31T19:10:47.705 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-natsort.
2026-03-31T19:10:47.709 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../30-python3-tempora_4.1.2-1_all.deb ...
2026-03-31T19:10:47.709 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-31T19:10:47.711 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../34-python3-natsort_8.0.2-1_all.deb ...
2026-03-31T19:10:47.712 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-31T19:10:47.728 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend.
2026-03-31T19:10:47.729 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-31T19:10:47.735 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../31-python3-portend_3.0.0-1_all.deb ...
2026-03-31T19:10:47.735 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../35-ceph-mgr-modules-core_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:47.736 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-31T19:10:47.736 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.756 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-31T19:10:47.763 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../32-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-31T19:10:47.764 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-31T19:10:47.773 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-31T19:10:47.779 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-31T19:10:47.780 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../36-libsqlite3-mod-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:47.781 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:47.785 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../33-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-31T19:10:47.786 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:10:47.798 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr.
2026-03-31T19:10:47.805 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../37-ceph-mgr_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:47.806 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:47.816 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort. 2026-03-31T19:10:47.822 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../34-python3-natsort_8.0.2-1_all.deb ... 2026-03-31T19:10:47.823 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-31T19:10:47.833 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mon. 2026-03-31T19:10:47.840 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-31T19:10:47.840 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../38-ceph-mon_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:47.841 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:47.845 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../35-ceph-mgr-modules-core_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:47.846 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:47.854 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 281 MB in 20s (13.7 MB/s) 2026-03-31T19:10:47.892 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-31T19:10:47.919 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-31T19:10:47.925 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../36-libsqlite3-mod-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:47.926 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:47.928 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 119262 files and directories currently installed.) 2026-03-31T19:10:47.930 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../00-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-31T19:10:47.932 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-31T19:10:47.933 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-31T19:10:47.941 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../39-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-31T19:10:47.942 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-31T19:10:47.946 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr. 2026-03-31T19:10:47.953 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-31T19:10:47.953 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../37-ceph-mgr_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:47.954 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:10:47.962 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../01-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-31T19:10:47.963 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-31T19:10:47.966 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-osd. 2026-03-31T19:10:47.972 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../40-ceph-osd_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:47.974 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:47.983 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-31T19:10:47.984 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon. 2026-03-31T19:10:47.989 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../02-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-31T19:10:47.990 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../38-ceph-mon_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:47.990 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-31T19:10:47.991 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.011 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-31T19:10:48.018 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../03-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-31T19:10:48.023 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-31T19:10:48.082 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5dbus5:amd64. 
2026-03-31T19:10:48.083 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-31T19:10:48.088 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../04-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-31T19:10:48.089 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-31T19:10:48.090 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../39-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-31T19:10:48.091 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-31T19:10:48.108 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-31T19:10:48.110 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd. 2026-03-31T19:10:48.115 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../05-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-31T19:10:48.115 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-31T19:10:48.116 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../40-ceph-osd_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.118 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.141 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-31T19:10:48.148 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../06-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-31T19:10:48.149 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-31T19:10:48.173 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../07-librbd1_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:48.223 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librbd1 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-31T19:10:48.235 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph. 2026-03-31T19:10:48.241 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../41-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.242 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.264 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-fuse. 2026-03-31T19:10:48.271 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../42-ceph-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.272 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.286 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../08-librados2_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.289 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librados2 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-31T19:10:48.364 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mds. 2026-03-31T19:10:48.371 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../43-ceph-mds_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.371 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mds (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.375 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph. 2026-03-31T19:10:48.380 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libnbd0. 2026-03-31T19:10:48.381 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../41-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:48.382 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.386 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../09-libnbd0_1.10.5-1_amd64.deb ... 2026-03-31T19:10:48.387 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-31T19:10:48.403 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse. 2026-03-31T19:10:48.408 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs2. 2026-03-31T19:10:48.409 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../42-ceph-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.410 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.413 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package cephadm. 2026-03-31T19:10:48.414 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../10-libcephfs2_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.415 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.419 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../44-cephadm_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.420 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.439 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rados. 2026-03-31T19:10:48.440 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds. 2026-03-31T19:10:48.442 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-31T19:10:48.446 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../11-python3-rados_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:48.447 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../43-ceph-mds_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.447 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rados (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.448 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.449 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../45-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-31T19:10:48.450 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-31T19:10:48.467 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-31T19:10:48.473 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../12-python3-ceph-argparse_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.477 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.478 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-31T19:10:48.484 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../46-ceph-mgr-cephadm_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.485 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.490 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm. 2026-03-31T19:10:48.491 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cephfs. 2026-03-31T19:10:48.496 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../44-cephadm_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.497 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../13-python3-cephfs_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:48.497 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.498 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cephfs (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.513 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-31T19:10:48.516 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-31T19:10:48.517 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-31T19:10:48.520 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../47-python3-repoze.lru_0.7-2_all.deb ... 2026-03-31T19:10:48.521 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-31T19:10:48.522 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../45-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-31T19:10:48.523 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-31T19:10:48.524 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../14-python3-ceph-common_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.524 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.538 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-routes. 2026-03-31T19:10:48.544 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../48-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-31T19:10:48.545 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-31T19:10:48.547 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-wcwidth. 
2026-03-31T19:10:48.550 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-31T19:10:48.553 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../15-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-31T19:10:48.554 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-31T19:10:48.556 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../46-ceph-mgr-cephadm_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.557 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.570 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-31T19:10:48.571 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-prettytable. 2026-03-31T19:10:48.577 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../49-ceph-mgr-dashboard_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.577 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../16-python3-prettytable_2.5.0-2_all.deb ... 2026-03-31T19:10:48.578 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.578 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-31T19:10:48.584 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-31T19:10:48.592 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../47-python3-repoze.lru_0.7-2_all.deb ... 2026-03-31T19:10:48.593 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-31T19:10:48.594 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rbd. 
2026-03-31T19:10:48.601 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../17-python3-rbd_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.601 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rbd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.611 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes. 2026-03-31T19:10:48.617 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../48-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-31T19:10:48.618 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-31T19:10:48.621 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-31T19:10:48.627 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../18-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-31T19:10:48.628 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-31T19:10:48.644 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-31T19:10:48.650 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package librgw2. 2026-03-31T19:10:48.650 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../49-ceph-mgr-dashboard_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:48.651 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.658 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../19-librgw2_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.659 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking librgw2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.803 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rgw. 
2026-03-31T19:10:48.809 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../20-python3-rgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.810 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.828 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-31T19:10:48.836 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../21-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-31T19:10:48.837 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-31T19:10:48.854 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libradosstriper1. 2026-03-31T19:10:48.861 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../22-libradosstriper1_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.862 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.884 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-common. 2026-03-31T19:10:48.891 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../23-ceph-common_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:48.892 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:48.912 INFO:teuthology.orchestra.run.vm01.stdout:Get:67 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 20.2.0-721-g5bb32787-1jammy [5416 kB] 2026-03-31T19:10:49.370 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-31T19:10:49.375 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-base. 
2026-03-31T19:10:49.377 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../50-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-31T19:10:49.378 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-31T19:10:49.381 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../24-ceph-base_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:49.385 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-base (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:49.433 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-31T19:10:49.433 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-joblib. 2026-03-31T19:10:49.440 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../50-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-31T19:10:49.440 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../51-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-31T19:10:49.441 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-31T19:10:49.441 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-31T19:10:49.489 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-31T19:10:49.490 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-31T19:10:49.495 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../25-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-31T19:10:49.496 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-31T19:10:49.496 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../52-python3-threadpoolctl_3.1.0-1_all.deb ... 
2026-03-31T19:10:49.497 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-31T19:10:49.498 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib. 2026-03-31T19:10:49.502 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../51-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-31T19:10:49.503 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-31T19:10:49.512 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cheroot. 2026-03-31T19:10:49.515 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-sklearn. 2026-03-31T19:10:49.519 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../26-python3-cheroot_8.5.2+ds1-1ubuntu3.2_all.deb ... 2026-03-31T19:10:49.519 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.2) ... 2026-03-31T19:10:49.522 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../53-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-31T19:10:49.523 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-31T19:10:49.541 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-31T19:10:49.541 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-31T19:10:49.547 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../52-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-31T19:10:49.547 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../27-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-31T19:10:49.548 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-31T19:10:49.548 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 
2026-03-31T19:10:49.565 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-31T19:10:49.565 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn. 2026-03-31T19:10:49.572 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../28-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-31T19:10:49.572 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../53-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-31T19:10:49.572 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-31T19:10:49.572 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-31T19:10:49.589 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-31T19:10:49.596 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../29-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-31T19:10:49.603 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-31T19:10:49.619 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-tempora. 2026-03-31T19:10:49.626 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../30-python3-tempora_4.1.2-1_all.deb ... 2026-03-31T19:10:49.627 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-31T19:10:49.674 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-31T19:10:49.674 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-portend. 2026-03-31T19:10:49.681 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../31-python3-portend_3.0.0-1_all.deb ... 
2026-03-31T19:10:49.682 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../54-ceph-mgr-diskprediction-local_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:49.683 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-31T19:10:49.684 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:49.702 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-31T19:10:49.709 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../32-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-31T19:10:49.711 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-31T19:10:49.712 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-31T19:10:49.720 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../54-ceph-mgr-diskprediction-local_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:49.720 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:49.728 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-31T19:10:49.734 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../33-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-31T19:10:49.734 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-31T19:10:49.767 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-natsort. 2026-03-31T19:10:49.774 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../34-python3-natsort_8.0.2-1_all.deb ... 2026-03-31T19:10:49.775 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-natsort (8.0.2-1) ... 
2026-03-31T19:10:49.795 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-31T19:10:49.802 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../35-ceph-mgr-modules-core_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:49.803 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:49.839 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-31T19:10:49.845 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../36-libsqlite3-mod-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:49.846 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:49.965 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr.
2026-03-31T19:10:49.971 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../37-ceph-mgr_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:49.972 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:49.980 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools.
2026-03-31T19:10:49.983 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-cachetools.
2026-03-31T19:10:49.986 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../55-python3-cachetools_5.0.0-1_all.deb ...
2026-03-31T19:10:49.986 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../55-python3-cachetools_5.0.0-1_all.deb ...
2026-03-31T19:10:49.987 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:49.987 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:49.998 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mon.
2026-03-31T19:10:50.004 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa.
2026-03-31T19:10:50.004 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-rsa.
2026-03-31T19:10:50.005 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../38-ceph-mon_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.005 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.010 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../56-python3-rsa_4.8-1_all.deb ...
2026-03-31T19:10:50.011 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-31T19:10:50.011 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../56-python3-rsa_4.8-1_all.deb ...
2026-03-31T19:10:50.012 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-31T19:10:50.030 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth.
2026-03-31T19:10:50.032 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-google-auth.
2026-03-31T19:10:50.036 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../57-python3-google-auth_1.5.1-3_all.deb ...
2026-03-31T19:10:50.037 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-31T19:10:50.038 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../57-python3-google-auth_1.5.1-3_all.deb ...
2026-03-31T19:10:50.039 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-31T19:10:50.057 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-31T19:10:50.059 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-31T19:10:50.063 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../58-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-31T19:10:50.065 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../58-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-31T19:10:50.079 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:10:50.081 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:10:50.096 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-31T19:10:50.097 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket.
2026-03-31T19:10:50.099 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-websocket.
2026-03-31T19:10:50.104 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../59-python3-websocket_1.2.3-1_all.deb ...
2026-03-31T19:10:50.104 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../39-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-31T19:10:50.106 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../59-python3-websocket_1.2.3-1_all.deb ...
2026-03-31T19:10:50.131 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-31T19:10:50.131 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:10:50.131 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-31T19:10:50.138 INFO:teuthology.orchestra.run.vm01.stdout:Get:68 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 20.2.0-721-g5bb32787-1jammy [246 kB]
2026-03-31T19:10:50.174 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-31T19:10:50.174 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-31T19:10:50.175 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-osd.
2026-03-31T19:10:50.181 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../60-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-31T19:10:50.182 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:10:50.182 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../40-ceph-osd_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.183 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../60-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-31T19:10:50.184 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-osd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.184 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:10:50.213 INFO:teuthology.orchestra.run.vm01.stdout:Get:69 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 20.2.0-721-g5bb32787-1jammy [124 kB]
2026-03-31T19:10:50.248 INFO:teuthology.orchestra.run.vm01.stdout:Get:70 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 20.2.0-721-g5bb32787-1jammy [907 kB]
2026-03-31T19:10:50.328 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-31T19:10:50.334 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-31T19:10:50.335 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../61-ceph-mgr-k8sevents_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:50.336 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.341 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../61-ceph-mgr-k8sevents_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:50.343 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.353 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-31T19:10:50.359 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../62-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-31T19:10:50.360 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:10:50.361 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-31T19:10:50.368 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../62-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-31T19:10:50.421 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:10:50.431 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-31T19:10:50.437 INFO:teuthology.orchestra.run.vm01.stdout:Get:71 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 20.2.0-721-g5bb32787-1jammy [6393 kB]
2026-03-31T19:10:50.437 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph.
2026-03-31T19:10:50.437 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../63-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:10:50.438 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:50.441 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-31T19:10:50.443 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../41-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.444 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.448 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../63-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:10:50.449 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:50.455 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq.
2026-03-31T19:10:50.460 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-fuse.
2026-03-31T19:10:50.461 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../64-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:10:50.462 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:50.465 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package jq.
2026-03-31T19:10:50.466 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../42-ceph-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.467 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.471 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../64-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:10:50.472 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:50.487 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat.
2026-03-31T19:10:50.490 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package socat.
2026-03-31T19:10:50.495 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../65-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-31T19:10:50.495 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mds.
2026-03-31T19:10:50.496 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:10:50.497 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../65-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-31T19:10:50.498 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:10:50.502 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../43-ceph-mds_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.502 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mds (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.518 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet.
2026-03-31T19:10:50.520 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package xmlstarlet.
2026-03-31T19:10:50.524 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../66-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-31T19:10:50.525 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:10:50.526 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../66-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-31T19:10:50.533 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:10:50.546 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package cephadm.
2026-03-31T19:10:50.552 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../44-cephadm_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.553 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.570 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test.
2026-03-31T19:10:50.571 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-31T19:10:50.576 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../67-ceph-test_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.577 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.577 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../45-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-31T19:10:50.578 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-test.
2026-03-31T19:10:50.578 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:50.584 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../67-ceph-test_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:50.585 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-test (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.605 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-31T19:10:50.611 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../46-ceph-mgr-cephadm_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:50.612 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:50.639 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-31T19:10:50.646 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../47-python3-repoze.lru_0.7-2_all.deb ...
2026-03-31T19:10:50.647 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-31T19:10:50.665 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-routes.
2026-03-31T19:10:50.672 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../48-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-31T19:10:50.673 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:10:50.796 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-31T19:10:50.805 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../49-ceph-mgr-dashboard_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:50.806 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:51.448 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-31T19:10:51.456 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../50-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-31T19:10:51.457 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:10:51.514 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-joblib.
2026-03-31T19:10:51.521 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../51-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-31T19:10:51.522 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:10:51.763 INFO:teuthology.orchestra.run.vm01.stdout:Get:72 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 20.2.0-721-g5bb32787-1jammy [21.7 MB]
2026-03-31T19:10:52.017 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-31T19:10:52.023 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../52-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-31T19:10:52.034 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:10:52.165 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-sklearn.
2026-03-31T19:10:52.172 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../53-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-31T19:10:52.186 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:10:52.200 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume.
2026-03-31T19:10:52.207 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package ceph-volume.
2026-03-31T19:10:52.207 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../68-ceph-volume_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:52.209 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.212 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../68-ceph-volume_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:52.213 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking ceph-volume (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.237 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-daemon.
2026-03-31T19:10:52.243 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs-daemon.
2026-03-31T19:10:52.243 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../69-libcephfs-daemon_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.245 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.249 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../69-libcephfs-daemon_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.250 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.270 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-proxy2.
2026-03-31T19:10:52.276 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../70-libcephfs-proxy2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.277 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.279 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs-proxy2.
2026-03-31T19:10:52.284 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../70-libcephfs-proxy2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.286 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.292 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-31T19:10:52.298 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../71-libcephfs-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.299 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.303 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-31T19:10:52.309 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../71-libcephfs-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:52.310 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-31T19:10:52.310 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.310 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../54-ceph-mgr-diskprediction-local_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:10:52.311 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:52.318 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli.
2026-03-31T19:10:52.325 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../72-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-31T19:10:52.326 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-31T19:10:52.331 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package nvme-cli.
2026-03-31T19:10:52.336 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../72-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-31T19:10:52.336 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-31T19:10:52.365 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-31T19:10:52.372 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../73-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-31T19:10:52.373 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:52.376 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-31T19:10:52.385 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../73-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-31T19:10:52.386 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:52.418 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-31T19:10:52.425 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../74-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-31T19:10:52.426 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-31T19:10:52.435 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-31T19:10:52.440 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../74-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-31T19:10:52.441 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-31T19:10:52.441 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy.
2026-03-31T19:10:52.448 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../75-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-31T19:10:52.449 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:10:52.459 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pluggy.
2026-03-31T19:10:52.465 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../75-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-31T19:10:52.467 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:10:52.468 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil.
2026-03-31T19:10:52.475 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../76-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-31T19:10:52.476 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:52.542 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-psutil.
2026-03-31T19:10:52.543 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py.
2026-03-31T19:10:52.549 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../76-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-31T19:10:52.549 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../77-python3-py_1.10.0-1_all.deb ...
2026-03-31T19:10:52.550 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:52.550 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-31T19:10:52.556 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-cachetools.
2026-03-31T19:10:52.562 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../55-python3-cachetools_5.0.0-1_all.deb ...
2026-03-31T19:10:52.563 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:52.574 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments.
2026-03-31T19:10:52.575 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-py.
2026-03-31T19:10:52.578 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-rsa.
2026-03-31T19:10:52.580 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../77-python3-py_1.10.0-1_all.deb ...
2026-03-31T19:10:52.580 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../78-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-31T19:10:52.581 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-31T19:10:52.581 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:52.586 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../56-python3-rsa_4.8-1_all.deb ...
2026-03-31T19:10:52.587 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-31T19:10:52.606 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pygments.
2026-03-31T19:10:52.608 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-google-auth.
2026-03-31T19:10:52.613 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../78-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-31T19:10:52.616 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../57-python3-google-auth_1.5.1-3_all.deb ...
2026-03-31T19:10:52.617 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:52.617 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-31T19:10:52.637 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-31T19:10:52.642 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml.
2026-03-31T19:10:52.644 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../58-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-31T19:10:52.644 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:10:52.649 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../79-python3-toml_0.10.2-1_all.deb ...
2026-03-31T19:10:52.651 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-31T19:10:52.663 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-websocket.
2026-03-31T19:10:52.669 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../59-python3-websocket_1.2.3-1_all.deb ...
2026-03-31T19:10:52.670 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest.
2026-03-31T19:10:52.670 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-31T19:10:52.675 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../80-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-31T19:10:52.675 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-toml.
2026-03-31T19:10:52.676 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:10:52.680 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../79-python3-toml_0.10.2-1_all.deb ...
2026-03-31T19:10:52.681 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-31T19:10:52.690 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-31T19:10:52.696 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../60-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-31T19:10:52.697 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:10:52.699 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-pytest.
2026-03-31T19:10:52.706 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../80-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-31T19:10:52.707 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:10:52.722 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson.
2026-03-31T19:10:52.722 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../81-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-31T19:10:52.723 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:52.746 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob.
2026-03-31T19:10:52.750 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../82-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-31T19:10:52.751 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:10:52.751 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-simplejson.
2026-03-31T19:10:52.755 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../81-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-31T19:10:52.756 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:52.774 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-31T19:10:52.778 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../83-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-31T19:10:52.779 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-webob.
2026-03-31T19:10:52.783 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../82-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-31T19:10:52.791 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:10:52.791 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-31T19:10:52.811 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-31T19:10:52.814 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../83-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-31T19:10:52.816 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-31T19:10:52.828 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-31T19:10:52.836 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../61-ceph-mgr-k8sevents_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:52.837 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:52.864 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-31T19:10:52.870 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../62-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-31T19:10:52.871 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-31T19:10:52.902 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-31T19:10:52.902 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw. 2026-03-31T19:10:52.906 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../84-radosgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:52.907 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:52.908 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../63-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 
2026-03-31T19:10:52.909 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-31T19:10:52.915 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package radosgw. 2026-03-31T19:10:52.920 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../84-radosgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:52.922 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:52.924 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package jq. 2026-03-31T19:10:52.931 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../64-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-31T19:10:52.932 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-31T19:10:52.947 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package socat. 2026-03-31T19:10:52.954 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../65-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-31T19:10:52.955 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-31T19:10:52.979 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package xmlstarlet. 2026-03-31T19:10:52.985 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../66-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-31T19:10:52.986 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-31T19:10:53.055 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-test. 2026-03-31T19:10:53.063 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../67-ceph-test_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:53.065 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-test (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:10:53.289 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse. 2026-03-31T19:10:53.291 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package rbd-fuse. 2026-03-31T19:10:53.294 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../85-rbd-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:53.295 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:53.295 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../85-rbd-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:53.296 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking rbd-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:53.314 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools. 2026-03-31T19:10:53.317 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package smartmontools. 2026-03-31T19:10:53.320 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../86-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-31T19:10:53.324 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../86-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-31T19:10:53.329 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-31T19:10:53.334 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-31T19:10:53.371 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-31T19:10:53.373 INFO:teuthology.orchestra.run.vm05.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-31T19:10:53.619 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 
2026-03-31T19:10:53.619 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-31T19:10:53.642 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-31T19:10:53.642 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-31T19:10:53.969 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-31T19:10:54.067 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-31T19:10:54.095 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-31T19:10:54.314 INFO:teuthology.orchestra.run.vm05.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-31T19:10:54.320 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-31T19:10:54.324 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-31T19:10:54.341 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package ceph-volume. 2026-03-31T19:10:54.347 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../68-ceph-volume_20.2.0-721-g5bb32787-1jammy_all.deb ... 2026-03-31T19:10:54.348 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking ceph-volume (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:54.374 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs-daemon. 2026-03-31T19:10:54.380 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../69-libcephfs-daemon_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 
2026-03-31T19:10:54.381 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:54.387 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-31T19:10:54.390 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-31T19:10:54.397 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs-proxy2. 2026-03-31T19:10:54.403 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../70-libcephfs-proxy2_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:54.403 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:54.418 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-31T19:10:54.425 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../71-libcephfs-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:10:54.426 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:54.445 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package nvme-cli. 2026-03-31T19:10:54.451 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../72-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-31T19:10:54.452 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-31T19:10:54.493 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-31T19:10:54.500 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../73-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 
2026-03-31T19:10:54.501 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:54.544 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-31T19:10:54.552 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../74-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-31T19:10:54.552 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-31T19:10:54.569 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pluggy.
2026-03-31T19:10:54.576 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../75-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-31T19:10:54.577 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:10:54.593 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-psutil.
2026-03-31T19:10:54.600 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../76-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-31T19:10:54.601 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:54.610 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-31T19:10:54.621 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-py.
2026-03-31T19:10:54.627 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../77-python3-py_1.10.0-1_all.deb ...
2026-03-31T19:10:54.628 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-31T19:10:54.631 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-31T19:10:54.660 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pygments.
2026-03-31T19:10:54.667 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../78-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-31T19:10:54.668 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:54.724 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-toml.
2026-03-31T19:10:54.730 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../79-python3-toml_0.10.2-1_all.deb ...
2026-03-31T19:10:54.735 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-31T19:10:54.751 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-pytest.
2026-03-31T19:10:54.756 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../80-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-31T19:10:54.757 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:10:54.795 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-simplejson.
2026-03-31T19:10:54.801 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../81-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-31T19:10:54.802 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:54.823 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-webob.
2026-03-31T19:10:54.829 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../82-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-31T19:10:54.830 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:10:54.848 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-31T19:10:54.854 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../83-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-31T19:10:54.855 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:10:54.953 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package radosgw.
2026-03-31T19:10:54.959 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../84-radosgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:54.959 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking radosgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.001 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-31T19:10:55.004 INFO:teuthology.orchestra.run.vm05.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-31T19:10:55.014 INFO:teuthology.orchestra.run.vm01.stdout:Get:73 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 20.2.0-721-g5bb32787-1jammy [14.1 kB]
2026-03-31T19:10:55.014 INFO:teuthology.orchestra.run.vm01.stdout:Get:74 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 20.2.0-721-g5bb32787-1jammy [955 kB]
2026-03-31T19:10:55.021 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.022 INFO:teuthology.orchestra.run.vm05.stdout:Setting up cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.078 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done
2026-03-31T19:10:55.079 INFO:teuthology.orchestra.run.vm05.stdout:Adding system user cephadm....done
2026-03-31T19:10:55.087 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:10:55.089 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:10:55.100 INFO:teuthology.orchestra.run.vm01.stdout:Get:75 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 20.2.0-721-g5bb32787-1jammy [2341 kB]
2026-03-31T19:10:55.160 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:55.161 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:55.162 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:10:55.163 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:10:55.286 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-31T19:10:55.295 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-31T19:10:55.367 INFO:teuthology.orchestra.run.vm01.stdout:Get:76 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 20.2.0-721-g5bb32787-1jammy [1049 kB]
2026-03-31T19:10:55.368 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package rbd-fuse.
2026-03-31T19:10:55.377 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../85-rbd-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:10:55.378 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking rbd-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.402 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package smartmontools.
2026-03-31T19:10:55.405 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:10:55.407 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../86-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-31T19:10:55.407 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-31T19:10:55.416 INFO:teuthology.orchestra.run.vm05.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:10:55.417 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-31T19:10:55.418 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-31T19:10:55.459 INFO:teuthology.orchestra.run.vm06.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-31T19:10:55.479 INFO:teuthology.orchestra.run.vm01.stdout:Get:77 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 20.2.0-721-g5bb32787-1jammy [179 kB]
2026-03-31T19:10:55.480 INFO:teuthology.orchestra.run.vm01.stdout:Get:78 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 20.2.0-721-g5bb32787-1jammy [45.5 MB]
2026-03-31T19:10:55.500 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:10:55.511 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:10:55.623 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:55.632 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:55.692 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:10:55.702 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:10:55.716 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-31T19:10:55.716 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-31T19:10:55.760 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.771 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:55.833 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:10:55.836 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-31T19:10:55.838 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:10:55.840 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:10:55.843 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:55.845 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:10:55.847 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-31T19:10:55.849 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:10:55.851 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:10:55.853 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:55.962 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-31T19:10:55.975 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-31T19:10:56.038 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:56.040 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:10:56.051 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:56.053 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:10:56.089 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-31T19:10:56.112 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:56.125 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:56.158 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-31T19:10:56.160 INFO:teuthology.orchestra.run.vm06.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-31T19:10:56.194 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:56.209 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:56.226 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-31T19:10:56.471 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-31T19:10:56.480 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:10:56.483 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:10:56.506 INFO:teuthology.orchestra.run.vm05.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:10:56.508 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:10:56.575 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:56.602 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:56.713 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:10:56.743 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:10:56.801 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:10:56.836 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:10:56.868 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:10:56.871 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:56.891 INFO:teuthology.orchestra.run.vm06.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-31T19:10:56.908 INFO:teuthology.orchestra.run.vm05.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:10:56.909 INFO:teuthology.orchestra.run.vm06.stdout:Setting up cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:56.910 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:56.956 INFO:teuthology.orchestra.run.vm06.stdout:Adding system user cephadm....done
2026-03-31T19:10:56.966 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:10:56.976 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:10:57.017 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:10:57.038 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:57.040 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:10:57.109 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-31T19:10:57.179 INFO:teuthology.orchestra.run.vm06.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:10:57.182 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-31T19:10:57.275 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:10:57.404 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-31T19:10:57.478 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:10:57.548 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:57.553 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-31T19:10:57.555 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:57.586 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:57.592 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-31T19:10:57.624 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:10:57.627 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:10:57.628 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:10:57.630 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:10:57.630 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-31T19:10:57.633 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:10:57.635 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:10:57.637 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-31T19:10:57.666 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:10:57.669 INFO:teuthology.orchestra.run.vm05.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:10:57.671 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:10:57.706 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-31T19:10:57.752 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-31T19:10:57.776 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-31T19:10:57.776 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:57.779 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-31T19:10:57.823 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:57.826 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-31T19:10:57.856 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:57.858 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:10:57.859 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-31T19:10:57.905 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-31T19:10:57.930 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-31T19:10:57.932 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:10:57.979 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-31T19:10:58.009 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:10:58.011 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-31T19:10:58.015 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:10:58.052 INFO:teuthology.orchestra.run.vm05.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:10:58.056 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-31T19:10:58.090 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:10:58.092 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:10:58.140 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:10:58.143 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:10:58.171 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:10:58.218 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:10:58.264 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:10:58.294 INFO:teuthology.orchestra.run.vm06.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:10:58.296 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:10:58.307 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:10:58.332 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:58.334 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:10:58.379 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:58.381 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:10:58.394 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:10:58.479 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-31T19:10:58.526 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-31T19:10:58.540 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:10:58.547 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:58.550 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-31T19:10:58.600 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:58.603 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-31T19:10:58.629 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:58.631 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:10:58.636 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:10:58.684 INFO:teuthology.orchestra.run.vm05.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:10:58.687 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:10:58.712 INFO:teuthology.orchestra.run.vm06.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:10:58.715 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.780 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:10:58.782 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.785 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.787 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.790 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:10:58.822 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:10:58.839 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:10:58.841 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librados2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.843 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.846 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:58.848 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:10:59.148 INFO:teuthology.orchestra.run.vm01.stdout:Get:79 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 20.2.0-721-g5bb32787-1jammy [8625 kB]
2026-03-31T19:10:59.369 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.371 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.374 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.374 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:10:59.376 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.378 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.380 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-31T19:10:59.402 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.404 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.407 INFO:teuthology.orchestra.run.vm05.stdout:Setting up librbd1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:10:59.409 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:59.412 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:10:59.446 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:10:59.446 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:10:59.452 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-31T19:10:59.455 INFO:teuthology.orchestra.run.vm06.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-31T19:10:59.458 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-31T19:10:59.480 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:10:59.481 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:10:59.535 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-31T19:10:59.612 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-31T19:10:59.614 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-31T19:10:59.802 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-tempora (4.1.2-1) ... 
2026-03-31T19:10:59.862 INFO:teuthology.orchestra.run.vm01.stdout:Get:80 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 20.2.0-721-g5bb32787-1jammy [14.2 kB] 2026-03-31T19:10:59.862 INFO:teuthology.orchestra.run.vm01.stdout:Get:81 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 20.2.0-721-g5bb32787-1jammy [99.5 MB] 2026-03-31T19:11:00.013 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-31T19:11:00.088 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.089 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.093 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.094 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rados (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.097 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.098 INFO:teuthology.orchestra.run.vm05.stdout:Setting up libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.100 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.101 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rbd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.103 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.104 INFO:teuthology.orchestra.run.vm05.stdout:Setting up rbd-fuse (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:11:00.106 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.107 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-rgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.109 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.110 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-cephfs (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.111 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.113 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.147 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done 2026-03-31T19:11:00.148 INFO:teuthology.orchestra.run.vm05.stdout:Adding group ceph....done 2026-03-31T19:11:00.172 INFO:teuthology.orchestra.run.vm06.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-31T19:11:00.174 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-31T19:11:00.185 INFO:teuthology.orchestra.run.vm05.stdout:Adding system user ceph....done 2026-03-31T19:11:00.190 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done 2026-03-31T19:11:00.195 INFO:teuthology.orchestra.run.vm05.stdout:Setting system user ceph properties....done 2026-03-31T19:11:00.199 INFO:teuthology.orchestra.run.vm05.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-31T19:11:00.201 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done 2026-03-31T19:11:00.206 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-31T19:11:00.257 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 
2026-03-31T19:11:00.260 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-31T19:11:00.268 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-31T19:11:00.275 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-31T19:11:00.329 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-31T19:11:00.428 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-31T19:11:00.499 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-31T19:11:00.501 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-31T19:11:00.510 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-31T19:11:00.522 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-31T19:11:00.638 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-31T19:11:00.706 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-31T19:11:00.708 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-31T19:11:00.793 INFO:teuthology.orchestra.run.vm06.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-31T19:11:00.795 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-31T19:11:00.892 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-test (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:11:00.895 INFO:teuthology.orchestra.run.vm05.stdout:Setting up radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.930 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-31T19:11:00.932 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librados2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.934 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librgw2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.937 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.939 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-31T19:11:00.948 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:00.950 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.133 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:01.133 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:01.186 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:01.186 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:01.503 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs2 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.505 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:11:01.507 INFO:teuthology.orchestra.run.vm06.stdout:Setting up librbd1 (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.509 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.511 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-fuse (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.517 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-base (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.542 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.574 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:11:01.574 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-31T19:11:01.604 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-31T19:11:01.631 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-31T19:11:01.953 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.955 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rados (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.957 INFO:teuthology.orchestra.run.vm06.stdout:Setting up libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.959 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rbd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.962 INFO:teuthology.orchestra.run.vm06.stdout:Setting up rbd-fuse (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T19:11:01.964 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-rgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.966 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-cephfs (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.969 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.988 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:01.998 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mds (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.004 INFO:teuthology.orchestra.run.vm06.stdout:Adding group ceph....done 2026-03-31T19:11:02.043 INFO:teuthology.orchestra.run.vm06.stdout:Adding system user ceph....done 2026-03-31T19:11:02.052 INFO:teuthology.orchestra.run.vm06.stdout:Setting system user ceph properties....done 2026-03-31T19:11:02.052 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:02.053 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:02.057 INFO:teuthology.orchestra.run.vm06.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-31T19:11:02.064 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:02.064 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:02.126 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 
2026-03-31T19:11:02.360 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-31T19:11:02.416 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.430 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.491 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-31T19:11:02.491 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-31T19:11:02.507 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-31T19:11:02.507 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-31T19:11:02.820 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-test (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.823 INFO:teuthology.orchestra.run.vm06.stdout:Setting up radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.871 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.881 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:02.963 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-31T19:11:02.963 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 
2026-03-31T19:11:02.978 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-31T19:11:02.978 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-31T19:11:03.062 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:03.062 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-31T19:11:03.363 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.363 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.366 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.366 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.379 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.380 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.424 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-base (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.442 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-31T19:11:03.442 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 
2026-03-31T19:11:03.444 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-31T19:11:03.444 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-31T19:11:03.517 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-31T19:11:03.772 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.784 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.787 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.799 INFO:teuthology.orchestra.run.vm05.stdout:Setting up ceph-volume (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.859 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.872 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.874 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.889 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:03.930 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-31T19:11:03.948 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mds (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:04.009 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-31T19:11:04.014 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 
2026-03-31T19:11:04.014 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:04.014 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-31T19:11:04.099 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-31T19:11:04.323 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.324 INFO:teuthology.orchestra.run.vm05.stdout:Running kernel seems to be up-to-date. 2026-03-31T19:11:04.324 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.324 INFO:teuthology.orchestra.run.vm05.stdout:Services to be restarted: 2026-03-31T19:11:04.327 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart apache-htcacheclean.service 2026-03-31T19:11:04.333 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart rsyslog.service 2026-03-31T19:11:04.336 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout:Service restarts being deferred: 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart networkd-dispatcher.service 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart unattended-upgrades.service 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout:No containers need to be restarted. 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout:No user sessions are running outdated binaries. 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:04.337 INFO:teuthology.orchestra.run.vm05.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
2026-03-31T19:11:04.409 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.409 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-31T19:11:04.409 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.409 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-31T19:11:04.411 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart apache-htcacheclean.service 2026-03-31T19:11:04.415 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:04.416 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart rsyslog.service 2026-03-31T19:11:04.419 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-31T19:11:04.420 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-31T19:11:04.487 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-31T19:11:04.488 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 
2026-03-31T19:11:04.930 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.014 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-31T19:11:05.015 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-31T19:11:05.276 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T19:11:05.279 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-jmespath python3-xmltodict s3cmd 2026-03-31T19:11:05.354 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-31T19:11:05.383 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.384 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T19:11:05.385 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.387 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-jmespath python3-xmltodict s3cmd 2026-03-31T19:11:05.399 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.461 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 
2026-03-31T19:11:05.461 INFO:teuthology.orchestra.run.vm06.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-31T19:11:05.465 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-31T19:11:05.544 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree... 2026-03-31T19:11:05.544 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information... 2026-03-31T19:11:05.654 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-31T19:11:05.654 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-31T19:11:05.687 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:11:05.687 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:11:05.687 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:11:05.687 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T19:11:05.705 INFO:teuthology.orchestra.run.vm05.stdout:The following NEW packages will be installed: 2026-03-31T19:11:05.705 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath python3-xmltodict s3cmd 2026-03-31T19:11:05.731 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 3 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:11:05.731 INFO:teuthology.orchestra.run.vm05.stdout:Need to get 155 kB of archives. 2026-03-31T19:11:05.731 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 678 kB of additional disk space will be used. 
2026-03-31T19:11:05.731 INFO:teuthology.orchestra.run.vm05.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-31T19:11:05.747 INFO:teuthology.orchestra.run.vm05.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-31T19:11:05.748 INFO:teuthology.orchestra.run.vm05.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 s3cmd all 2.2.0-1 [120 kB] 2026-03-31T19:11:05.792 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:11:05.792 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:11:05.793 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:11:05.793 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T19:11:05.808 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-31T19:11:05.808 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict s3cmd 2026-03-31T19:11:05.808 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.821 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.824 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:05.832 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 3 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:11:05.832 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 155 kB of archives. 2026-03-31T19:11:05.832 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 678 kB of additional disk space will be used. 
2026-03-31T19:11:05.832 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-31T19:11:05.836 INFO:teuthology.orchestra.run.vm06.stdout:Setting up ceph-volume (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:05.848 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-31T19:11:05.849 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 s3cmd all 2.2.0-1 [120 kB]
2026-03-31T19:11:05.954 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T19:11:05.962 INFO:teuthology.orchestra.run.vm05.stdout:Fetched 155 kB in 0s (2815 kB/s)
2026-03-31T19:11:05.977 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-jmespath.
2026-03-31T19:11:06.011 INFO:teuthology.orchestra.run.vm05.stdout:(Reading database ... 126082 files and directories currently installed.)
2026-03-31T19:11:06.013 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-31T19:11:06.014 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:06.032 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-31T19:11:06.032 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-31T19:11:06.039 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-31T19:11:06.039 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:06.057 INFO:teuthology.orchestra.run.vm05.stdout:Selecting previously unselected package s3cmd.
2026-03-31T19:11:06.063 INFO:teuthology.orchestra.run.vm05.stdout:Preparing to unpack .../archives/s3cmd_2.2.0-1_all.deb ...
2026-03-31T19:11:06.064 INFO:teuthology.orchestra.run.vm05.stdout:Unpacking s3cmd (2.2.0-1) ...
2026-03-31T19:11:06.066 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 155 kB in 0s (2742 kB/s)
2026-03-31T19:11:06.086 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath.
2026-03-31T19:11:06.100 INFO:teuthology.orchestra.run.vm05.stdout:Setting up s3cmd (2.2.0-1) ...
2026-03-31T19:11:06.121 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 126082 files and directories currently installed.)
2026-03-31T19:11:06.123 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-31T19:11:06.125 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:06.144 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-31T19:11:06.151 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-31T19:11:06.152 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:06.172 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package s3cmd.
2026-03-31T19:11:06.179 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../archives/s3cmd_2.2.0-1_all.deb ...
2026-03-31T19:11:06.180 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking s3cmd (2.2.0-1) ...
2026-03-31T19:11:06.193 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:06.214 INFO:teuthology.orchestra.run.vm03.stdout:Setting up s3cmd (2.2.0-1) ...
2026-03-31T19:11:06.258 INFO:teuthology.orchestra.run.vm05.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:06.306 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:06.333 INFO:teuthology.orchestra.run.vm05.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T19:11:06.343 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.343 INFO:teuthology.orchestra.run.vm06.stdout:Running kernel seems to be up-to-date.
2026-03-31T19:11:06.343 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.343 INFO:teuthology.orchestra.run.vm06.stdout:Services to be restarted:
2026-03-31T19:11:06.346 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart apache-htcacheclean.service
2026-03-31T19:11:06.351 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart rsyslog.service
2026-03-31T19:11:06.354 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.354 INFO:teuthology.orchestra.run.vm06.stdout:Service restarts being deferred:
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart networkd-dispatcher.service
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart unattended-upgrades.service
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:No containers need to be restarted.
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:No user sessions are running outdated binaries.
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:06.355 INFO:teuthology.orchestra.run.vm06.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-31T19:11:06.376 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:06.452 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T19:11:06.659 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.659 INFO:teuthology.orchestra.run.vm05.stdout:Running kernel seems to be up-to-date.
2026-03-31T19:11:06.659 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.659 INFO:teuthology.orchestra.run.vm05.stdout:Services to be restarted:
2026-03-31T19:11:06.663 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart apache-htcacheclean.service
2026-03-31T19:11:06.669 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart rsyslog.service
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:Service restarts being deferred:
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart networkd-dispatcher.service
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout: systemctl restart unattended-upgrades.service
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:No containers need to be restarted.
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:No user sessions are running outdated binaries.
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:06.672 INFO:teuthology.orchestra.run.vm05.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-31T19:11:06.770 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.770 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date.
2026-03-31T19:11:06.770 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.770 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted:
2026-03-31T19:11:06.773 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart apache-htcacheclean.service
2026-03-31T19:11:06.778 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart rsyslog.service
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred:
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted.
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries.
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:06.781 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-31T19:11:07.210 INFO:teuthology.orchestra.run.vm01.stdout:Get:82 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 20.2.0-721-g5bb32787-1jammy [135 kB]
2026-03-31T19:11:07.211 INFO:teuthology.orchestra.run.vm01.stdout:Get:83 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-daemon amd64 20.2.0-721-g5bb32787-1jammy [43.2 kB]
2026-03-31T19:11:07.211 INFO:teuthology.orchestra.run.vm01.stdout:Get:84 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-proxy2 amd64 20.2.0-721-g5bb32787-1jammy [30.7 kB]
2026-03-31T19:11:07.212 INFO:teuthology.orchestra.run.vm01.stdout:Get:85 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 20.2.0-721-g5bb32787-1jammy [41.4 kB]
2026-03-31T19:11:07.212 INFO:teuthology.orchestra.run.vm01.stdout:Get:86 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 20.2.0-721-g5bb32787-1jammy [25.1 MB]
2026-03-31T19:11:07.316 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T19:11:07.319 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-jmespath python3-xmltodict s3cmd
2026-03-31T19:11:07.399 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T19:11:07.563 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T19:11:07.564 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T19:11:07.637 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T19:11:07.640 DEBUG:teuthology.parallel:result is None
2026-03-31T19:11:07.689 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T19:11:07.690 INFO:teuthology.orchestra.run.vm06.stdout:  kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T19:11:07.690 INFO:teuthology.orchestra.run.vm06.stdout:  libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T19:11:07.690 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T19:11:07.710 INFO:teuthology.orchestra.run.vm06.stdout:The following NEW packages will be installed:
2026-03-31T19:11:07.710 INFO:teuthology.orchestra.run.vm06.stdout:  python3-jmespath python3-xmltodict s3cmd
2026-03-31T19:11:07.715 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T19:11:07.719 DEBUG:teuthology.parallel:result is None
2026-03-31T19:11:07.939 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 3 newly installed, 0 to remove and 50 not upgraded.
2026-03-31T19:11:07.940 INFO:teuthology.orchestra.run.vm06.stdout:Need to get 155 kB of archives.
2026-03-31T19:11:07.940 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 678 kB of additional disk space will be used.
2026-03-31T19:11:07.940 INFO:teuthology.orchestra.run.vm06.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-31T19:11:08.164 INFO:teuthology.orchestra.run.vm06.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-31T19:11:08.186 INFO:teuthology.orchestra.run.vm06.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 s3cmd all 2.2.0-1 [120 kB]
2026-03-31T19:11:08.600 INFO:teuthology.orchestra.run.vm06.stdout:Fetched 155 kB in 1s (221 kB/s)
2026-03-31T19:11:08.614 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-jmespath.
2026-03-31T19:11:08.642 INFO:teuthology.orchestra.run.vm06.stdout:(Reading database ... 126082 files and directories currently installed.)
2026-03-31T19:11:08.644 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-31T19:11:08.645 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:08.660 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-31T19:11:08.667 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-31T19:11:08.668 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:08.681 INFO:teuthology.orchestra.run.vm06.stdout:Selecting previously unselected package s3cmd.
2026-03-31T19:11:08.688 INFO:teuthology.orchestra.run.vm06.stdout:Preparing to unpack .../archives/s3cmd_2.2.0-1_all.deb ...
2026-03-31T19:11:08.688 INFO:teuthology.orchestra.run.vm06.stdout:Unpacking s3cmd (2.2.0-1) ...
2026-03-31T19:11:08.721 INFO:teuthology.orchestra.run.vm06.stdout:Setting up s3cmd (2.2.0-1) ...
2026-03-31T19:11:08.805 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:08.868 INFO:teuthology.orchestra.run.vm06.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:08.890 INFO:teuthology.orchestra.run.vm01.stdout:Get:87 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 20.2.0-721-g5bb32787-1jammy [97.5 kB]
2026-03-31T19:11:08.938 INFO:teuthology.orchestra.run.vm06.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T19:11:09.157 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 281 MB in 42s (6717 kB/s)
2026-03-31T19:11:09.247 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.247 INFO:teuthology.orchestra.run.vm06.stdout:Running kernel seems to be up-to-date.
2026-03-31T19:11:09.247 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.247 INFO:teuthology.orchestra.run.vm06.stdout:Services to be restarted:
2026-03-31T19:11:09.249 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart apache-htcacheclean.service
2026-03-31T19:11:09.255 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart rsyslog.service
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:Service restarts being deferred:
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart networkd-dispatcher.service
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout: systemctl restart unattended-upgrades.service
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:No containers need to be restarted.
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:No user sessions are running outdated binaries.
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:09.257 INFO:teuthology.orchestra.run.vm06.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-31T19:11:09.269 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-31T19:11:09.300 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 119262 files and directories currently installed.)
2026-03-31T19:11:09.302 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../00-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-31T19:11:09.304 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:11:09.323 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-31T19:11:09.329 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../01-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-31T19:11:09.330 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-31T19:11:09.345 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-31T19:11:09.350 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../02-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-31T19:11:09.351 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:11:09.371 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-31T19:11:09.376 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../03-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:11:09.380 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:09.417 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-31T19:11:09.422 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../04-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:11:09.423 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:09.440 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-31T19:11:09.446 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../05-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-31T19:11:09.446 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:09.469 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-31T19:11:09.474 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../06-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-31T19:11:09.475 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:11:09.498 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../07-librbd1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.500 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librbd1 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:11:09.562 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../08-librados2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.564 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librados2 (20.2.0-721-g5bb32787-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-31T19:11:09.619 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libnbd0.
2026-03-31T19:11:09.625 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../09-libnbd0_1.10.5-1_amd64.deb ...
2026-03-31T19:11:09.626 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-31T19:11:09.640 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs2.
2026-03-31T19:11:09.645 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../10-libcephfs2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.646 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.668 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rados.
2026-03-31T19:11:09.673 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../11-python3-rados_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.674 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rados (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.692 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-31T19:11:09.699 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../12-python3-ceph-argparse_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:09.699 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.712 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cephfs.
2026-03-31T19:11:09.718 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../13-python3-cephfs_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.718 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cephfs (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.736 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-31T19:11:09.744 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../14-python3-ceph-common_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:09.744 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.768 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-31T19:11:09.774 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../15-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-31T19:11:09.775 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:11:09.792 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-prettytable.
2026-03-31T19:11:09.798 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../16-python3-prettytable_2.5.0-2_all.deb ...
2026-03-31T19:11:09.798 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-31T19:11:09.814 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rbd.
2026-03-31T19:11:09.819 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../17-python3-rbd_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.820 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rbd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:09.839 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-31T19:11:09.844 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../18-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-31T19:11:09.844 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:11:09.865 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librgw2.
2026-03-31T19:11:09.870 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../19-librgw2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:09.870 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.019 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rgw.
2026-03-31T19:11:10.025 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../20-python3-rgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.026 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.044 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-31T19:11:10.051 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../21-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-31T19:11:10.052 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:11:10.068 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libradosstriper1.
2026-03-31T19:11:10.074 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../22-libradosstriper1_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.075 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.095 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-common.
2026-03-31T19:11:10.101 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../23-ceph-common_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.102 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.161 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T19:11:10.164 DEBUG:teuthology.parallel:result is None
2026-03-31T19:11:10.494 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-base.
2026-03-31T19:11:10.500 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../24-ceph-base_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.505 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-base (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.594 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-31T19:11:10.601 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../25-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-31T19:11:10.601 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:11:10.615 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cheroot.
2026-03-31T19:11:10.621 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../26-python3-cheroot_8.5.2+ds1-1ubuntu3.2_all.deb ...
2026-03-31T19:11:10.622 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:11:10.640 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-31T19:11:10.645 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../27-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-31T19:11:10.646 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:11:10.661 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-31T19:11:10.666 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../28-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-31T19:11:10.667 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:11:10.681 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-31T19:11:10.686 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../29-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-31T19:11:10.687 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:11:10.705 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempora.
2026-03-31T19:11:10.711 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../30-python3-tempora_4.1.2-1_all.deb ...
2026-03-31T19:11:10.711 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-31T19:11:10.727 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-portend.
2026-03-31T19:11:10.733 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../31-python3-portend_3.0.0-1_all.deb ...
2026-03-31T19:11:10.733 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-31T19:11:10.747 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-31T19:11:10.754 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../32-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-31T19:11:10.755 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-31T19:11:10.769 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-31T19:11:10.775 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../33-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-31T19:11:10.776 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:11:10.804 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-natsort.
2026-03-31T19:11:10.809 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../34-python3-natsort_8.0.2-1_all.deb ...
2026-03-31T19:11:10.810 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-31T19:11:10.826 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-31T19:11:10.831 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../35-ceph-mgr-modules-core_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:10.832 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.864 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-31T19:11:10.870 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../36-libsqlite3-mod-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.871 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.888 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr.
2026-03-31T19:11:10.894 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../37-ceph-mgr_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.894 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:10.920 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mon.
2026-03-31T19:11:10.925 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../38-ceph-mon_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:10.926 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.011 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-31T19:11:11.016 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../39-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-31T19:11:11.017 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:11:11.035 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-osd.
2026-03-31T19:11:11.041 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../40-ceph-osd_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:11.042 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-osd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.259 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph.
2026-03-31T19:11:11.265 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../41-ceph_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:11.266 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.280 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-fuse.
2026-03-31T19:11:11.285 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../42-ceph-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:11.286 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.312 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mds.
2026-03-31T19:11:11.318 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../43-ceph-mds_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:11.319 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mds (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.358 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package cephadm.
2026-03-31T19:11:11.363 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../44-cephadm_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:11.364 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.383 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-31T19:11:11.389 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../45-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-31T19:11:11.390 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:11:11.415 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-31T19:11:11.421 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../46-ceph-mgr-cephadm_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:11.422 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:11.447 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-31T19:11:11.453 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../47-python3-repoze.lru_0.7-2_all.deb ...
2026-03-31T19:11:11.454 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-31T19:11:11.468 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-routes.
2026-03-31T19:11:11.474 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../48-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-31T19:11:11.475 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:11:11.498 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-31T19:11:11.503 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../49-ceph-mgr-dashboard_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:11.504 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:12.077 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-31T19:11:12.084 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../50-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-31T19:11:12.085 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:11:12.139 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-joblib.
2026-03-31T19:11:12.144 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../51-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-31T19:11:12.145 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:11:12.178 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-31T19:11:12.184 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../52-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-31T19:11:12.185 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:11:12.200 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn.
2026-03-31T19:11:12.206 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../53-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-31T19:11:12.207 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:11:12.323 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-31T19:11:12.329 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../54-ceph-mgr-diskprediction-local_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:12.330 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:12.553 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cachetools.
2026-03-31T19:11:12.559 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../55-python3-cachetools_5.0.0-1_all.deb ...
2026-03-31T19:11:12.559 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-31T19:11:12.574 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rsa.
2026-03-31T19:11:12.581 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../56-python3-rsa_4.8-1_all.deb ...
2026-03-31T19:11:12.582 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-31T19:11:12.599 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-google-auth.
2026-03-31T19:11:12.605 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../57-python3-google-auth_1.5.1-3_all.deb ...
2026-03-31T19:11:12.605 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-31T19:11:12.622 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-31T19:11:12.627 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../58-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-31T19:11:12.628 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:11:12.643 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-websocket.
2026-03-31T19:11:12.649 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../59-python3-websocket_1.2.3-1_all.deb ...
2026-03-31T19:11:12.650 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-31T19:11:12.667 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-31T19:11:12.673 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../60-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-31T19:11:12.674 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:11:12.808 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-31T19:11:12.814 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../61-ceph-mgr-k8sevents_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:12.815 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:12.831 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-31T19:11:12.838 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../62-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-31T19:11:12.839 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:11:12.858 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-31T19:11:12.864 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../63-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:11:12.865 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:11:12.880 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package jq.
2026-03-31T19:11:12.887 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../64-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-31T19:11:12.888 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:11:12.907 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package socat.
2026-03-31T19:11:12.913 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../65-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-31T19:11:12.914 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:11:12.935 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package xmlstarlet.
2026-03-31T19:11:12.941 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../66-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-31T19:11:12.942 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:11:12.983 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-test.
2026-03-31T19:11:12.989 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../67-ceph-test_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:12.990 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-test (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.059 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-volume.
2026-03-31T19:11:14.064 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../68-ceph-volume_20.2.0-721-g5bb32787-1jammy_all.deb ...
2026-03-31T19:11:14.065 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-volume (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.090 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-daemon.
2026-03-31T19:11:14.095 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../69-libcephfs-daemon_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:14.096 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.110 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-proxy2.
2026-03-31T19:11:14.116 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../70-libcephfs-proxy2_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:14.117 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.130 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-31T19:11:14.136 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../71-libcephfs-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:14.137 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.153 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package nvme-cli.
2026-03-31T19:11:14.159 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../72-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-31T19:11:14.160 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-31T19:11:14.195 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-31T19:11:14.200 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../73-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-31T19:11:14.201 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:11:14.239 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-31T19:11:14.245 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../74-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-31T19:11:14.246 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-31T19:11:14.260 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pluggy.
2026-03-31T19:11:14.266 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../75-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-31T19:11:14.267 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:11:14.285 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-psutil.
2026-03-31T19:11:14.290 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../76-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-31T19:11:14.291 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-31T19:11:14.311 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-py.
2026-03-31T19:11:14.317 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../77-python3-py_1.10.0-1_all.deb ...
2026-03-31T19:11:14.317 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-31T19:11:14.341 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pygments.
2026-03-31T19:11:14.346 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../78-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-31T19:11:14.347 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:11:14.400 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-toml.
2026-03-31T19:11:14.406 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../79-python3-toml_0.10.2-1_all.deb ...
2026-03-31T19:11:14.407 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-31T19:11:14.421 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pytest.
2026-03-31T19:11:14.427 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../80-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-31T19:11:14.428 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:11:14.465 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplejson.
2026-03-31T19:11:14.470 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../81-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-31T19:11:14.471 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:11:14.489 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webob.
2026-03-31T19:11:14.495 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../82-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-31T19:11:14.496 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:11:14.517 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-31T19:11:14.518 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../83-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-31T19:11:14.519 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:11:14.611 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package radosgw.
2026-03-31T19:11:14.617 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../84-radosgw_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:14.617 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking radosgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.935 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package rbd-fuse.
2026-03-31T19:11:14.941 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../85-rbd-fuse_20.2.0-721-g5bb32787-1jammy_amd64.deb ...
2026-03-31T19:11:14.942 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking rbd-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:14.961 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package smartmontools.
2026-03-31T19:11:14.966 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../86-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-31T19:11:14.974 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-31T19:11:15.014 INFO:teuthology.orchestra.run.vm01.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-31T19:11:15.263 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-31T19:11:15.263 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-31T19:11:15.620 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-31T19:11:15.681 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-31T19:11:15.683 INFO:teuthology.orchestra.run.vm01.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-31T19:11:15.744 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-31T19:11:15.971 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-31T19:11:16.350 INFO:teuthology.orchestra.run.vm01.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-31T19:11:16.369 INFO:teuthology.orchestra.run.vm01.stdout:Setting up cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:16.408 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user cephadm....done
2026-03-31T19:11:16.417 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-31T19:11:16.478 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-31T19:11:16.481 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-31T19:11:16.543 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-31T19:11:16.609 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-31T19:11:16.611 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-31T19:11:16.697 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-31T19:11:16.817 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-31T19:11:16.886 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-31T19:11:16.950 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-argparse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:17.017 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-31T19:11:17.020 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-31T19:11:17.023 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-31T19:11:17.025 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-31T19:11:17.027 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-31T19:11:17.147 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-31T19:11:17.215 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-proxy2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:17.217 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-31T19:11:17.285 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-31T19:11:17.361 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-31T19:11:17.616 INFO:teuthology.orchestra.run.vm01.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-31T19:11:17.618 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-31T19:11:17.709 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-31T19:11:17.843 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.2) ...
2026-03-31T19:11:17.928 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-31T19:11:17.999 INFO:teuthology.orchestra.run.vm01.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-31T19:11:18.001 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:18.092 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-31T19:11:18.613 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:18.618 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-31T19:11:18.686 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-31T19:11:18.689 INFO:teuthology.orchestra.run.vm01.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-31T19:11:18.691 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-31T19:11:18.763 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-31T19:11:18.826 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:18.828 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-31T19:11:18.902 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-31T19:11:18.972 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-31T19:11:19.040 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-31T19:11:19.043 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-31T19:11:19.118 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-31T19:11:19.121 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-31T19:11:19.187 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-31T19:11:19.274 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-31T19:11:19.340 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-31T19:11:19.343 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-31T19:11:19.471 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-31T19:11:19.532 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-31T19:11:19.535 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-31T19:11:19.605 INFO:teuthology.orchestra.run.vm01.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-31T19:11:19.608 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-31T19:11:19.740 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-31T19:11:19.742 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librados2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:19.745 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librgw2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:19.747 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libsqlite3-mod-ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:19.750 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-31T19:11:20.314 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs2 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.316 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libradosstriper1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.318 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librbd1 (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.320 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-modules-core (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.323 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.380 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-31T19:11:20.380 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-31T19:11:20.742 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-dev (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.744 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rados (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.747 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-daemon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.749 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rbd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.751 INFO:teuthology.orchestra.run.vm01.stdout:Setting up rbd-fuse (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.753 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.755 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cephfs (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.757 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-common (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:20.789 INFO:teuthology.orchestra.run.vm01.stdout:Adding group ceph....done
2026-03-31T19:11:20.822 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user ceph....done
2026-03-31T19:11:20.831 INFO:teuthology.orchestra.run.vm01.stdout:Setting system user ceph properties....done
2026-03-31T19:11:20.838 INFO:teuthology.orchestra.run.vm01.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-31T19:11:20.905 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-31T19:11:21.142 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-31T19:11:21.536 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-test (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:21.539 INFO:teuthology.orchestra.run.vm01.stdout:Setting up radosgw (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:21.778 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-31T19:11:21.778 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-31T19:11:22.158 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-base (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:22.246 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-31T19:11:22.598 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mds (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:22.659 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-31T19:11:22.659 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-31T19:11:23.028 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:23.101 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-31T19:11:23.101 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-31T19:11:23.462 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-osd (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:23.535 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-31T19:11:23.535 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-31T19:11:23.893 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:23.895 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:23.908 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mon (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T19:11:23.968 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-31T19:11:23.969 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-31T19:11:24.320 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:24.332 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:24.334 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:24.346 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-volume (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:11:24.465 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-31T19:11:24.541 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-31T19:11:24.829 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.829 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date. 
2026-03-31T19:11:24.829 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.829 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted: 2026-03-31T19:11:24.831 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart apache-htcacheclean.service 2026-03-31T19:11:24.836 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart rsyslog.service 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred: 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted. 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries. 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:24.839 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-31T19:11:25.674 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T19:11:25.676 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-jmespath python3-xmltodict s3cmd 2026-03-31T19:11:25.750 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-31T19:11:25.920 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-31T19:11:25.921 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-31T19:11:26.053 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T19:11:26.053 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T19:11:26.053 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T19:11:26.053 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T19:11:26.067 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed:
2026-03-31T19:11:26.067 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath python3-xmltodict s3cmd
2026-03-31T19:11:26.300 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 3 newly installed, 0 to remove and 50 not upgraded.
2026-03-31T19:11:26.300 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 155 kB of archives.
2026-03-31T19:11:26.300 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 678 kB of additional disk space will be used.
2026-03-31T19:11:26.300 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-31T19:11:26.531 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-31T19:11:26.554 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 s3cmd all 2.2.0-1 [120 kB]
2026-03-31T19:11:26.974 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 155 kB in 1s (213 kB/s)
2026-03-31T19:11:27.018 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jmespath.
2026-03-31T19:11:27.052 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126082 files and directories currently installed.)
2026-03-31T19:11:27.054 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-31T19:11:27.055 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:27.071 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-31T19:11:27.077 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-31T19:11:27.078 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:27.093 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package s3cmd.
2026-03-31T19:11:27.099 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../archives/s3cmd_2.2.0-1_all.deb ...
2026-03-31T19:11:27.100 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking s3cmd (2.2.0-1) ...
2026-03-31T19:11:27.130 INFO:teuthology.orchestra.run.vm01.stdout:Setting up s3cmd (2.2.0-1) ...
2026-03-31T19:11:27.218 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-31T19:11:27.281 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-31T19:11:27.350 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T19:11:27.651 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.651 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date.
2026-03-31T19:11:27.651 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.651 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted:
2026-03-31T19:11:27.653 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart apache-htcacheclean.service
2026-03-31T19:11:27.658 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart rsyslog.service
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred:
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted.
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries.
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:27.660 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-31T19:11:28.501 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T19:11:28.503 DEBUG:teuthology.parallel:result is None
2026-03-31T19:11:28.503 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:11:29.140 DEBUG:teuthology.orchestra.run.vm01:> dpkg-query -W -f '${Version}' ceph
2026-03-31T19:11:29.148 INFO:teuthology.orchestra.run.vm01.stdout:20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:29.148 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:29.148 INFO:teuthology.task.install:The correct ceph version 20.2.0-721-g5bb32787-1jammy is installed.
2026-03-31T19:11:29.149 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:11:29.781 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph
2026-03-31T19:11:29.790 INFO:teuthology.orchestra.run.vm03.stdout:20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:29.790 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:29.790 INFO:teuthology.task.install:The correct ceph version 20.2.0-721-g5bb32787-1jammy is installed.
2026-03-31T19:11:29.790 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:11:30.379 DEBUG:teuthology.orchestra.run.vm05:> dpkg-query -W -f '${Version}' ceph
2026-03-31T19:11:30.388 INFO:teuthology.orchestra.run.vm05.stdout:20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:30.389 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:30.389 INFO:teuthology.task.install:The correct ceph version 20.2.0-721-g5bb32787-1jammy is installed.
2026-03-31T19:11:30.389 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=5bb3278730741031382ca9c3dc9d221a942e06a2
2026-03-31T19:11:30.979 DEBUG:teuthology.orchestra.run.vm06:> dpkg-query -W -f '${Version}' ceph
2026-03-31T19:11:30.989 INFO:teuthology.orchestra.run.vm06.stdout:20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:30.989 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-721-g5bb32787-1jammy
2026-03-31T19:11:30.989 INFO:teuthology.task.install:The correct ceph version 20.2.0-721-g5bb32787-1jammy is installed.
2026-03-31T19:11:30.989 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-31T19:11:30.989 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:30.989 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-31T19:11:30.998 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:30.998 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-31T19:11:31.007 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:31.007 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-31T19:11:31.015 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:31.015 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-31T19:11:31.038 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-31T19:11:31.038 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:31.038 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-31T19:11:31.053 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-31T19:11:31.101 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:31.101 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper
2026-03-31T19:11:31.108 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-31T19:11:31.162 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:31.162 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper
2026-03-31T19:11:31.169 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-31T19:11:31.223 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:31.223 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/daemon-helper
2026-03-31T19:11:31.231 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-31T19:11:31.281 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-31T19:11:31.281 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:31.281 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-31T19:11:31.289 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-31T19:11:31.341 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:31.341 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-31T19:11:31.349 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-31T19:11:31.397 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:31.397 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-31T19:11:31.404 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-31T19:11:31.454 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:31.454 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-31T19:11:31.462 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-31T19:11:31.512 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-31T19:11:31.513 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:31.513 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-31T19:11:31.521 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-31T19:11:31.569 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:31.569 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer
2026-03-31T19:11:31.577 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-31T19:11:31.625 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:31.625 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer
2026-03-31T19:11:31.632 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-31T19:11:31.682 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:31.682 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/usr/bin/stdin-killer
2026-03-31T19:11:31.689 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-31T19:11:31.737 INFO:teuthology.run_tasks:Running task ceph...
2026-03-31T19:11:31.773 INFO:tasks.ceph:Making ceph log dir writeable by non-root...
2026-03-31T19:11:31.774 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /var/log/ceph
2026-03-31T19:11:31.775 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /var/log/ceph
2026-03-31T19:11:31.776 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /var/log/ceph
2026-03-31T19:11:31.776 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 777 /var/log/ceph
2026-03-31T19:11:31.784 INFO:tasks.ceph:Disabling ceph logrotate...
2026-03-31T19:11:31.785 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-31T19:11:31.830 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-31T19:11:31.831 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-31T19:11:31.832 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-31T19:11:31.839 INFO:tasks.ceph:Creating extra log directories...
2026-03-31T19:11:31.839 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-31T19:11:31.878 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-31T19:11:31.883 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-31T19:11:31.884 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-31T19:11:31.895 INFO:tasks.ceph:Creating ceph cluster ceph...
2026-03-31T19:11:31.895 INFO:tasks.ceph:config {'conf': {'global': {'mon client directed command retry': 5, 'mon election default strategy': 1, 'ms bind msgr2': False, 'ms inject socket failures': 5000, 'ms type': 'async'}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon scrub interval': 300}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'avl', 'bluestore block size': 96636764160, 'bluestore debug extent map encode check': True, 'bluestore fsck on mount': True, 'bluestore onode segment size': '512K', 'bluestore write v2': False, 'bluestore write v2 random': True, 'bluestore zero block detection': True, 'bluestore_elastic_shared_blobs': True, 'debug bluefs': 20, 'debug bluestore': 20, 'debug ms': 1, 'debug osd': 5, 'debug rocksdb': 10, 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd debug verify cached snaps': True, 'osd debug verify missing on start': True, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd mclock override recovery settings': True, 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': True, 'osd memory target': 939524096, 'osd objectstore': 'bluestore', 'osd op queue': 'debug_random', 'osd op queue cut off': 'debug_random'}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(POOL_APP_NOT_ENABLED\\)', '\\(OSDMAP_FLAGS\\)', '\\(OSD_', '\\(OBJECT_', '\\(PG_', '\\(SLOW_OPS\\)', 'overall HEALTH', 'slow request', '\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(OSD_SLOW_PING_TIME', '\\(MON_DOWN\\)'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': False, 'mon_bind_addrvec': True}
2026-03-31T19:11:31.895 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340', 'branch': 'tentacle', 'description': 'rados/singleton/{all/ec-esb-fio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{ubuntu_latest}}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '4340', 'ktype': 'distro', 'last_in_suite': False, 'machine_type': 'vps', 'meta': [{'desc': 'all/ec-esb-fio'}], 'name': 'kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps', 'no_nested_subset': False, 'openstack': [{'volumes': {'count': 6, 'size': 20}}], 'os_type': 'ubuntu', 'os_version': '22.04', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'global': {'mon client directed command retry': 5, 'mon election default strategy': 1, 'ms bind msgr2': False, 'ms inject socket failures': 5000, 'ms type': 'async'}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon scrub interval': 300}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'avl', 'bluestore block size': 96636764160, 'bluestore debug extent map encode check': True, 'bluestore fsck on mount': True, 'bluestore onode segment size': '512K', 'bluestore write v2': False, 'bluestore write v2 random': True, 'bluestore zero block detection': True, 'bluestore_elastic_shared_blobs': True, 'debug bluefs': 20, 'debug bluestore': 20, 'debug ms': 1, 'debug osd': 5, 'debug rocksdb': 10, 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd debug verify cached snaps': True, 'osd debug verify missing on start': True, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd mclock override recovery settings': True, 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': True, 'osd memory target': 939524096, 'osd objectstore': 'bluestore', 'osd op queue': 'debug_random', 'osd op queue cut off': 'debug_random'}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(OSD_SLOW_PING_TIME', '\\(MON_DOWN\\)'], 'mon_bind_msgr2': False, 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2'}, 'ceph-deploy': {'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}}}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'thrashosds': {'bdev_inject_crash': 2, 'bdev_inject_crash_probability': 0.5}, 'workunit': {'branch': 'tt-tentacle', 'sha1': '0392f78529848ec72469e8e431875cb98d3a5fb4'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mgr.x', 'client.0'], ['osd.0', 'osd.1'], ['osd.2', 'osd.3'], ['osd.4', 'osd.5']], 'seed': 6407, 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2', 'sleep_before_teardown': 0, 'subset': '1/100000', 'suite': 'rados', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '0392f78529848ec72469e8e431875cb98d3a5fb4', 'targets': {'vm01.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLd69V00xYC2CMtkKaj3kAPlLI99FmnqsYl0RoH4t9jdwc9wliMTIlX+q+JRc9A8cvWVYXXkUC885ro/3uByaFw=', 'vm03.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjJZoFckQT4dMqZz/UV7jOh0mm6AYkzTJa/zbNkN6aRKmLm7fj3mGn+TBrHKZKJjLUE5Ywh4LcJZCjUtHwKxZ0=', 'vm05.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEr7pI7+qw3uyso88tkHOcY44shjJVyBxyGVHuDaDi1snWaUNYFW1Mw6qL6DCC197hl1o16I3jGW5Tn5sI38Di0=', 'vm06.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObccJMZykEKQN0ju0OECDNla2291TGoFMM9toqCbCry/ymSnjIDSPLVXHJlRrjNZjahAORFCX4F3VNyDi3+Wps='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': {'log-ignorelist': ['\\(POOL_APP_NOT_ENABLED\\)', '\\(OSDMAP_FLAGS\\)', '\\(OSD_', '\\(OBJECT_', '\\(PG_', '\\(SLOW_OPS\\)', 'overall HEALTH', 'slow request', '\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(OSD_SLOW_PING_TIME', '\\(MON_DOWN\\)'], 'conf': {'global': {'mon client directed command retry': 5, 'mon election default strategy': 1, 'ms bind msgr2': False, 'ms inject socket failures': 5000, 'ms type': 'async'}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20, 'mon scrub interval': 300}, 'osd': {'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'avl', 'bluestore block size': 96636764160, 'bluestore debug extent map encode check': True, 'bluestore fsck on mount': True, 'bluestore onode segment size': '512K', 'bluestore write v2': False, 'bluestore write v2 random': True, 'bluestore zero block detection': True, 'bluestore_elastic_shared_blobs': True, 'debug bluefs': 20, 'debug bluestore': 20, 'debug ms': 1, 'debug osd': 5, 'debug rocksdb': 10, 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd debug verify cached snaps': True, 'osd debug verify missing on start': True, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd mclock override recovery settings': True, 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': True, 'osd memory target': 939524096, 'osd objectstore': 'bluestore', 'osd op queue': 'debug_random', 'osd op queue cut off': 'debug_random'}}, 'flavor': 'default', 'fs': 'xfs', 'mon_bind_msgr2': False, 'sha1': '5bb3278730741031382ca9c3dc9d221a942e06a2', 'cluster': 'ceph'}}, {'workunit': {'clients': {'client.0': ['rados/ec-esb-fio.sh']}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'uv2', 'teuthology_repo': 'https://github.com/kshtsk/teuthology', 'teuthology_sha1': 'a59626679648f962bca99d20d35578f2998c8f37', 'timestamp': '2026-03-31_11:18:10', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.282426'}
2026-03-31T19:11:31.895 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-31T19:11:31.926 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-31T19:11:31.935 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-31T19:11:31.936 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-31T19:11:31.940 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/run/ceph
2026-03-31T19:11:31.970 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m0777 -- /var/run/ceph
2026-03-31T19:11:31.979 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/run/ceph
2026-03-31T19:11:31.980 DEBUG:teuthology.orchestra.run.vm06:> sudo install -d -m0777 -- /var/run/ceph
2026-03-31T19:11:31.989 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:31.989 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout
2026-03-31T19:11:32.032 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-03-31T19:11:32.032 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_1
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 785 Links: 1
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-31 19:09:41.558407000 +0000
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-31 19:09:41.438407000 +0000
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-31 19:09:41.438407000 +0000
2026-03-31T19:11:32.076 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-31T19:11:32.077 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-03-31T19:11:32.124 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-31T19:11:32.124 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-31T19:11:32.124 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000186478 s, 2.7 MB/s
2026-03-31T19:11:32.125 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1
2026-03-31T19:11:32.169 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_2
2026-03-31T19:11:32.212 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1
2026-03-31T19:11:32.212 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 817 Links: 1
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-31 19:09:41.834407000 +0000
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-31 19:09:41.710407000 +0000
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-31 19:09:41.710407000 +0000
2026-03-31T19:11:32.213 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-31T19:11:32.213 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1
2026-03-31T19:11:32.260 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-31T19:11:32.260 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-31T19:11:32.260 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000159749 s, 3.2 MB/s
2026-03-31T19:11:32.261 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2
2026-03-31T19:11:32.305 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_3
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 851 Links: 1
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-31 19:09:42.134407000 +0000
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-31 19:09:42.006407000 +0000
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-31 19:09:42.006407000 +0000
2026-03-31T19:11:32.348 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-31T19:11:32.349 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1
2026-03-31T19:11:32.396 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-31T19:11:32.396 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-31T19:11:32.396 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000179166 s, 2.9 MB/s
2026-03-31T19:11:32.397 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3
2026-03-31T19:11:32.442 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vg_nvme/lv_4
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 884 Links: 1
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-31 19:10:09.566407000 +0000
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-31 19:09:42.290407000 +0000
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-31 19:09:42.290407000 +0000
2026-03-31T19:11:32.489 INFO:teuthology.orchestra.run.vm03.stdout: Birth: -
2026-03-31T19:11:32.489 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1
2026-03-31T19:11:32.540 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in
2026-03-31T19:11:32.540 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out
2026-03-31T19:11:32.540 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000170739 s, 3.0 MB/s
2026-03-31T19:11:32.541 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4
2026-03-31T19:11:32.585 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}
2026-03-31T19:11:32.585 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:32.585 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout
2026-03-31T19:11:32.588 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-03-31T19:11:32.589 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_1
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 759 Links: 1
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-31 19:09:49.195088000 +0000
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-31 19:09:49.067088000 +0000
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-31 19:09:49.067088000 +0000
2026-03-31T19:11:32.634 INFO:teuthology.orchestra.run.vm05.stdout: Birth: -
2026-03-31T19:11:32.634 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-03-31T19:11:32.681 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in
2026-03-31T19:11:32.682 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out
2026-03-31T19:11:32.682 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00016558 s, 3.1 MB/s
2026-03-31T19:11:32.682 DEBUG:teuthology.orchestra.run.vm05:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-31T19:11:32.727 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_2 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 792 Links: 1 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-31 19:09:49.491088000 +0000 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-31 19:09:49.351088000 +0000 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-31 19:09:49.351088000 +0000 2026-03-31T19:11:32.770 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-31T19:11:32.770 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-31T19:11:32.818 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-31T19:11:32.818 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-31T19:11:32.818 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00016018 s, 3.2 MB/s 2026-03-31T19:11:32.818 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-31T19:11:32.863 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_3 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 824 Links: 1 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-31 19:09:49.651088000 +0000 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-31 19:09:49.647088000 +0000 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-31 19:09:49.647088000 +0000 2026-03-31T19:11:32.910 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-31T19:11:32.910 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-31T19:11:32.958 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-31T19:11:32.959 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-31T19:11:32.959 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000158727 s, 3.2 MB/s 2026-03-31T19:11:32.959 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-31T19:11:33.007 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_4 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout:Device: 5h/5d Inode: 855 Links: 1 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-31 19:10:09.579088000 +0000 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-31 19:09:49.935088000 +0000 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-31 19:09:49.935088000 +0000 2026-03-31T19:11:33.054 INFO:teuthology.orchestra.run.vm05.stdout: Birth: - 2026-03-31T19:11:33.054 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-31T19:11:33.102 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-31T19:11:33.102 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-31T19:11:33.102 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000169527 s, 3.0 MB/s 2026-03-31T19:11:33.103 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-31T19:11:33.151 INFO:tasks.ceph:osd dev map: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:33.151 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-31T19:11:33.151 DEBUG:teuthology.orchestra.run.vm06:> dd if=/scratch_devs of=/dev/stdout 2026-03-31T19:11:33.155 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-31T19:11:33.155 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_1 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 788 Links: 1 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-31 19:09:41.441992000 +0000 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-31 19:09:41.437992000 +0000 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-31 19:09:41.437992000 +0000 2026-03-31T19:11:33.201 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-31T19:11:33.201 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-31T19:11:33.248 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-31T19:11:33.248 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-31T19:11:33.248 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000201076 s, 2.5 MB/s 2026-03-31T19:11:33.249 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-31T19:11:33.293 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_2 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 817 Links: 1 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-31 19:09:41.741992000 +0000 2026-03-31T19:11:33.340 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-31 19:09:41.737992000 +0000 2026-03-31T19:11:33.341 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-31 19:09:41.737992000 +0000 2026-03-31T19:11:33.341 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-31T19:11:33.341 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-31T19:11:33.388 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-31T19:11:33.388 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-31T19:11:33.388 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000140282 s, 3.6 MB/s 2026-03-31T19:11:33.389 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-31T19:11:33.433 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_3 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 852 Links: 1 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-31 19:09:42.165992000 +0000 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-31 19:09:42.033992000 +0000 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-31 19:09:42.033992000 +0000 2026-03-31T19:11:33.476 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-31T19:11:33.476 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-31T19:11:33.523 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-31T19:11:33.524 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-31T19:11:33.524 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000168826 s, 3.0 MB/s 2026-03-31T19:11:33.524 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-31T19:11:33.569 DEBUG:teuthology.orchestra.run.vm06:> stat /dev/vg_nvme/lv_4 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout:Device: 5h/5d Inode: 882 Links: 1 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout:Access: 2026-03-31 19:10:09.597992000 +0000 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout:Modify: 2026-03-31 19:09:42.337992000 +0000 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout:Change: 2026-03-31 19:09:42.337992000 +0000 2026-03-31T19:11:33.612 INFO:teuthology.orchestra.run.vm06.stdout: Birth: - 2026-03-31T19:11:33.612 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-31T19:11:33.660 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records in 2026-03-31T19:11:33.660 INFO:teuthology.orchestra.run.vm06.stderr:1+0 records out 2026-03-31T19:11:33.660 INFO:teuthology.orchestra.run.vm06.stderr:512 bytes copied, 0.000169978 s, 3.0 MB/s 2026-03-31T19:11:33.660 DEBUG:teuthology.orchestra.run.vm06:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-31T19:11:33.705 INFO:tasks.ceph:osd dev map: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:33.705 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm03.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm05.local'): {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm06.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2'}} 2026-03-31T19:11:33.705 INFO:tasks.ceph:Generating config... 2026-03-31T19:11:33.706 INFO:tasks.ceph:[global] mon client directed command retry = 5 2026-03-31T19:11:33.706 INFO:tasks.ceph:[global] mon election default strategy = 1 2026-03-31T19:11:33.706 INFO:tasks.ceph:[global] ms bind msgr2 = False 2026-03-31T19:11:33.706 INFO:tasks.ceph:[global] ms inject socket failures = 5000 2026-03-31T19:11:33.706 INFO:tasks.ceph:[global] ms type = async 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mgr] debug mgr = 20 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mgr] debug ms = 1 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mon] debug mon = 20 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mon] debug ms = 1 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mon] debug paxos = 20 2026-03-31T19:11:33.706 INFO:tasks.ceph:[mon] mon scrub interval = 300 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bdev async discard = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bdev enable discard = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore allocator = avl 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore block size = 96636764160 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore debug extent map encode check = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore fsck on mount = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore onode segment size = 512K 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore write v2 = False 2026-03-31T19:11:33.706 
INFO:tasks.ceph:[osd] bluestore write v2 random = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore zero block detection = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] bluestore_elastic_shared_blobs = True 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] debug bluefs = 20 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] debug bluestore = 20 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] debug ms = 1 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] debug osd = 5 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] debug rocksdb = 10 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] mon osd backfillfull_ratio = 0.85 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] mon osd full ratio = 0.9 2026-03-31T19:11:33.706 INFO:tasks.ceph:[osd] mon osd nearfull ratio = 0.8 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd debug verify cached snaps = True 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd debug verify missing on start = True 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd failsafe full ratio = 0.95 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd mclock override recovery settings = True 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd mclock profile = high_recovery_ops 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd mclock skip benchmark = True 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd memory target = 939524096 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd objectstore = bluestore 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd op queue = debug_random 2026-03-31T19:11:33.707 INFO:tasks.ceph:[osd] osd op queue cut off = debug_random 2026-03-31T19:11:33.707 INFO:tasks.ceph:Setting up mon.a... 
2026-03-31T19:11:33.707 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring 2026-03-31T19:11:33.723 INFO:teuthology.orchestra.run.vm01.stdout:creating /etc/ceph/ceph.keyring 2026-03-31T19:11:33.726 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring 2026-03-31T19:11:33.791 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-31T19:11:33.841 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '[v1:192.168.123.101:6789]')] 2026-03-31T19:11:33.841 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '[v1:192.168.123.101:6789]', 'mon client directed command retry': 5, 'mon 
election default strategy': 1, 'ms bind msgr2': False, 'ms inject socket failures': 5000, 'ms type': 'async'}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': True, 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'bdev async discard': True, 'bdev enable discard': True, 'bluestore allocator': 'avl', 'bluestore block size': 96636764160, 'bluestore debug extent map encode check': True, 'bluestore fsck on mount': True, 'bluestore onode segment size': '512K', 'bluestore write v2': False, 'bluestore write v2 random': True, 'bluestore zero block detection': True, 'bluestore_elastic_shared_blobs': True, 'debug bluefs': 20, 'debug bluestore': 20, 'debug ms': 1, 'debug osd': 5, 'debug rocksdb': 10, 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd debug verify cached snaps': True, 'osd debug verify missing on start': True, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd mclock override recovery settings': True, 'osd memory target': 939524096, 'osd objectstore': 'bluestore', 'osd op queue': 'debug_random', 'osd op queue cut off': 'debug_random'}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug 
paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false', 'mon scrub interval': 300}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok'}, 'mon.a': {}} 2026-03-31T19:11:33.841 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:11:33.842 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf 2026-03-31T19:11:33.884 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --addv a '[v1:192.168.123.101:6789]' --print /home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:33.941 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: generated fsid 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = tentacle 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:epoch 0 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:fsid 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:last_changed 2026-03-31T19:11:33.941404+0000 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:created 2026-03-31T19:11:33.941404+0000 2026-03-31T19:11:33.942 
INFO:teuthology.orchestra.run.vm01.stdout:min_mon_release 20 (tentacle) 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:election_strategy: 1 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:0: v1:192.168.123.101:6789/0 mon.a 2026-03-31T19:11:33.942 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (1 monitors) 2026-03-31T19:11:33.943 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf 2026-03-31T19:11:33.988 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID 0c165360-d647-4244-b40a-096067707e1d... 2026-03-31T19:11:33.988 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-31T19:11:34.030 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-31T19:11:34.031 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-31T19:11:34.032 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: chdir = "" 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: auth supported = cephx 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: filestore xattr use omap = true 2026-03-31T19:11:34.046 
INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.046 INFO:teuthology.orchestra.run.vm01.stdout: mon clock drift allowed = 1.000 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd crush chooseleaf type = 0 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: auth debug = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: ms die on old message = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: ms die on bug = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon pg warn max object skew = 0 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: # disable pg_autoscaler by default for new pools 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default size = 2 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow primary affinity = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow pg remap = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on legacy crush tunables = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on crush straw calc version zero = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on no sortbitwise = false 2026-03-31T19:11:34.047 
INFO:teuthology.orchestra.run.vm01.stdout: mon warn on osd down out interval zero = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on too few osds = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon_allow_pool_size_one = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd default data pool replay window = 5 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon allow pool delete = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon cluster log file level = debug 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: debug asserts on shutdown = true 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon health detail to clog = false 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon host = [v1:192.168.123.101:6789] 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon client directed command retry = 5 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: mon election default strategy = 1 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: ms bind msgr2 = False 2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: ms inject socket failures = 5000 2026-03-31T19:11:34.047 
INFO:teuthology.orchestra.run.vm01.stdout: ms type = async
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:[osd]
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd journal size = 100
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub load threshold = 5.0
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub max interval = 600
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock profile = high_recovery_ops
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock skip benchmark = True
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd recover clone overlap = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd recovery max chunk = 1048576
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd debug shutdown = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd debug op order = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd debug verify stray on activate = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd debug trim objects = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd open classes on start = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout: osd debug pg log writeout = true
2026-03-31T19:11:34.047 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd deep scrub update digest min age = 30
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd map max advance = 10
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: journal zero on create = true
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: filestore ondisk finisher threads = 3
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: filestore apply finisher threads = 3
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bdev debug aio = true
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd debug misdirected ops = true
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bdev async discard = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bdev enable discard = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore allocator = avl
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore block size = 96636764160
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore debug extent map encode check = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore fsck on mount = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore onode segment size = 512K
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore write v2 = False
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore write v2 random = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore zero block detection = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: bluestore_elastic_shared_blobs = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug bluefs = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug bluestore = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug osd = 5
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug rocksdb = 10
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon osd backfillfull_ratio = 0.85
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon osd full ratio = 0.9
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon osd nearfull ratio = 0.8
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd debug verify cached snaps = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd debug verify missing on start = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd failsafe full ratio = 0.95
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock override recovery settings = True
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd memory target = 939524096
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd objectstore = bluestore
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd op queue = debug_random
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: osd op queue cut off = debug_random
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:[mgr]
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug mgr = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mgr/telemetry/nag = false
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:[mon]
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug paxos = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon data avail warn = 5
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon mgr mkfs grace = 240
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon osd reporter subtree level = osd
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon osd prime pg temp = true
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.048 INFO:teuthology.orchestra.run.vm01.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: auth mon ticket ttl = 660 # 11m
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: auth service ticket ttl = 240 # 4m
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: # don't complain about insecure global_id in the test suite
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: # 1m isn't quite enough
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: mon_down_mkfs_grace = 2m
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_filestore_osds = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: mon scrub interval = 300
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:[client]
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: rgw cache enabled = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable ops log = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable usage log = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm01.stdout:[mon.a]
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:[global]
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: chdir = ""
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: auth supported = cephx
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: filestore xattr use omap = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon clock drift allowed = 1.000
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: osd crush chooseleaf type = 0
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: auth debug = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: ms die on old message = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: ms die on bug = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon pg warn max object skew = 0
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: # disable pg_autoscaler by default for new pools
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: osd_pool_default_pg_autoscale_mode = off
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: osd pool default size = 2
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon osd allow primary affinity = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon osd allow pg remap = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on legacy crush tunables = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on crush straw calc version zero = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on no sortbitwise = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on osd down out interval zero = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon warn on too few osds = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_pool_no_redundancy = false
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: mon_allow_pool_size_one = true
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.049 INFO:teuthology.orchestra.run.vm03.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd default data pool replay window = 5
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon allow pool delete = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon cluster log file level = debug
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug asserts on shutdown = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon health detail to clog = false
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon host = [v1:192.168.123.101:6789]
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon client directed command retry = 5
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon election default strategy = 1
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: ms bind msgr2 = False
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: ms inject socket failures = 5000
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: ms type = async
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: fsid = 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:[osd]
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd journal size = 100
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd scrub load threshold = 5.0
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd scrub max interval = 600
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock profile = high_recovery_ops
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock skip benchmark = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd recover clone overlap = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd recovery max chunk = 1048576
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug shutdown = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug op order = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug verify stray on activate = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug trim objects = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd open classes on start = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug pg log writeout = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd deep scrub update digest min age = 30
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd map max advance = 10
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: journal zero on create = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: filestore ondisk finisher threads = 3
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: filestore apply finisher threads = 3
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bdev debug aio = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: osd debug misdirected ops = true
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bdev async discard = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bdev enable discard = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore allocator = avl
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore block size = 96636764160
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore debug extent map encode check = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore fsck on mount = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore onode segment size = 512K
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore write v2 = False
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore write v2 random = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore zero block detection = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: bluestore_elastic_shared_blobs = True
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug bluefs = 20
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug bluestore = 20
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug osd = 5
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: debug rocksdb = 10
2026-03-31T19:11:34.050 INFO:teuthology.orchestra.run.vm03.stdout: mon osd backfillfull_ratio = 0.85
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon osd full ratio = 0.9
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon osd nearfull ratio = 0.8
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd debug verify cached snaps = True
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd debug verify missing on start = True
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd failsafe full ratio = 0.95
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd mclock override recovery settings = True
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd memory target = 939524096
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd objectstore = bluestore
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd op queue = debug_random
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: osd op queue cut off = debug_random
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:[mgr]
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug mgr = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug mon = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug auth = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mgr/telemetry/nag = false
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:[mon]
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug ms = 1
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug mon = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug paxos = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: debug auth = 20
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon data avail warn = 5
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon mgr mkfs grace = 240
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon osd reporter subtree level = osd
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon osd prime pg temp = true
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: auth mon ticket ttl = 660 # 11m
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: auth service ticket ttl = 240 # 4m
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: # don't complain about insecure global_id in the test suite
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: # 1m isn't quite enough
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon_down_mkfs_grace = 2m
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon_warn_on_filestore_osds = false
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: mon scrub interval = 300
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:[client]
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: rgw cache enabled = true
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: rgw enable ops log = true
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: rgw enable usage log = true
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-31T19:11:34.051 INFO:teuthology.orchestra.run.vm03.stdout:[mon.a]
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:[global]
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: chdir = ""
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: auth supported = cephx
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: filestore xattr use omap = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon clock drift allowed = 1.000
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd crush chooseleaf type = 0
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: auth debug = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: ms die on old message = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: ms die on bug = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon pg warn max object skew = 0
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: # disable pg_autoscaler by default for new pools
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd_pool_default_pg_autoscale_mode = off
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd pool default size = 2
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon osd allow primary affinity = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon osd allow pg remap = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on legacy crush tunables = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on crush straw calc version zero = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on no sortbitwise = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on osd down out interval zero = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon warn on too few osds = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_pool_no_redundancy = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon_allow_pool_size_one = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd default data pool replay window = 5
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon allow pool delete = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon cluster log file level = debug
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: debug asserts on shutdown = true
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon health detail to clog = false
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon host = [v1:192.168.123.101:6789]
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon client directed command retry = 5
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: mon election default strategy = 1
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: ms bind msgr2 = False
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: ms inject socket failures = 5000
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: ms type = async
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: fsid = 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:[osd]
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd journal size = 100
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd scrub load threshold = 5.0
2026-03-31T19:11:34.052 INFO:teuthology.orchestra.run.vm06.stdout: osd scrub max interval = 600
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock profile = high_recovery_ops
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock skip benchmark = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd recover clone overlap = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd recovery max chunk = 1048576
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug shutdown = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug op order = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug verify stray on activate = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug trim objects = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd open classes on start = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug pg log writeout = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd deep scrub update digest min age = 30
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd map max advance = 10
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: journal zero on create = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: filestore ondisk finisher threads = 3
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: filestore apply finisher threads = 3
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bdev debug aio = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug misdirected ops = true
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bdev async discard = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bdev enable discard = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore allocator = avl
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore block size = 96636764160
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore debug extent map encode check = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore fsck on mount = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore onode segment size = 512K
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore write v2 = False
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore write v2 random = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore zero block detection = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: bluestore_elastic_shared_blobs = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug bluefs = 20
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug bluestore = 20
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug osd = 5
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug rocksdb = 10
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mon osd backfillfull_ratio = 0.85
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mon osd full ratio = 0.9
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mon osd nearfull ratio = 0.8
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug verify cached snaps = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd debug verify missing on start = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd failsafe full ratio = 0.95
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd mclock override recovery settings = True
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd memory target = 939524096
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd objectstore = bluestore
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd op queue = debug_random
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: osd op queue cut off = debug_random
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:[mgr]
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug mgr = 20
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug mon = 20
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: debug auth = 20
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout: mgr/telemetry/nag = false
2026-03-31T19:11:34.053 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:[mon]
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: debug ms = 1
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: debug mon = 20
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: debug paxos = 20
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: debug auth = 20
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon data avail warn = 5
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon mgr mkfs grace = 240
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min pgs per osd = 4
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon osd reporter subtree level = osd
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon osd prime pg temp = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon reweight min bytes per osd = 10
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: auth mon ticket ttl = 660 # 11m
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: auth service ticket ttl = 240 # 4m
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: # don't complain about insecure global_id in the test suite
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: # 1m isn't quite enough
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon_down_mkfs_grace = 2m
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon_warn_on_filestore_osds = false
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: mon scrub interval = 300
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:[client]
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: rgw cache enabled = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: rgw enable ops log = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: rgw enable usage log = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm06.stdout:[mon.a]
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:[global]
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: chdir = ""
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: auth supported = cephx
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: filestore xattr use omap = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: mon clock drift allowed = 1.000
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: osd crush chooseleaf type = 0
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: auth debug = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: ms die on old message = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: ms die on bug = true
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: mon pg warn max object skew = 0
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: # disable pg_autoscaler by default for new pools
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout: osd_pool_default_pg_autoscale_mode = off
2026-03-31T19:11:34.054 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default size = 2
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow primary affinity = true
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow pg remap = true
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on legacy crush tunables = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on crush straw calc version zero = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on no sortbitwise = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on osd down out interval zero = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on too few osds = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_no_redundancy = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon_allow_pool_size_one = true
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd default data pool replay window = 5
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon allow pool delete = true
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon cluster log file level = debug
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: debug asserts on shutdown = true
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon health detail to clog = false
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon host = [v1:192.168.123.101:6789]
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon client
directed command retry = 5 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: mon election default strategy = 1 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: ms bind msgr2 = False 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: ms inject socket failures = 5000 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: ms type = async 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: fsid = 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout:[osd] 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd journal size = 100 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub load threshold = 5.0 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub max interval = 600 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock profile = high_recovery_ops 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock skip benchmark = True 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd recover clone overlap = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd recovery max chunk = 1048576 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug shutdown = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug op order = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug verify stray on activate = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug trim objects = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 
2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd open classes on start = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug pg log writeout = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd deep scrub update digest min age = 30 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd map max advance = 10 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: journal zero on create = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: filestore ondisk finisher threads = 3 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: filestore apply finisher threads = 3 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bdev debug aio = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: osd debug misdirected ops = true 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bdev async discard = True 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bdev enable discard = True 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bluestore allocator = avl 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bluestore block size = 96636764160 2026-03-31T19:11:34.055 INFO:teuthology.orchestra.run.vm05.stdout: bluestore debug extent map encode check = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: bluestore fsck on mount = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: bluestore onode segment size = 512K 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 
bluestore write v2 = False 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: bluestore write v2 random = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: bluestore zero block detection = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: bluestore_elastic_shared_blobs = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug bluefs = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug bluestore = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug osd = 5 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug rocksdb = 10 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon osd backfillfull_ratio = 0.85 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon osd full ratio = 0.9 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon osd nearfull ratio = 0.8 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd debug verify cached snaps = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd debug verify missing on start = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd failsafe full ratio = 0.95 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock override recovery settings = True 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd memory target = 939524096 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd objectstore = bluestore 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd op queue = debug_random 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: osd op queue cut off = debug_random 2026-03-31T19:11:34.056 
INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout:[mgr] 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug mgr = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mgr/telemetry/nag = false 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout:[mon] 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug paxos = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon data avail warn = 5 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon mgr mkfs grace = 240 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon osd reporter subtree level = osd 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon osd prime pg temp = true 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: # rotate auth tickets quickly to exercise renewal paths 
2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: auth mon ticket ttl = 660 # 11m 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: auth service ticket ttl = 240 # 4m 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: # don't complain about insecure global_id in the test suite 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: # 1m isn't quite enough 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon_down_mkfs_grace = 2m 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_filestore_osds = false 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: mon scrub interval = 300 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout:[client] 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: rgw cache enabled = true 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable ops log = true 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable usage log = true 2026-03-31T19:11:34.056 INFO:teuthology.orchestra.run.vm05.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-31T19:11:34.057 INFO:teuthology.orchestra.run.vm05.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok 2026-03-31T19:11:34.057 INFO:teuthology.orchestra.run.vm05.stdout:[mon.a] 2026-03-31T19:11:34.057 INFO:tasks.ceph:Creating admin key on mon.a... 
2026-03-31T19:11:34.057 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring 2026-03-31T19:11:34.120 INFO:tasks.ceph:Copying monmap to all nodes... 2026-03-31T19:11:34.120 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:11:34.120 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout 2026-03-31T19:11:34.164 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:11:34.164 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout 2026-03-31T19:11:34.212 INFO:tasks.ceph:Sending monmap to node ubuntu@vm01.local 2026-03-31T19:11:34.212 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:11:34.212 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-31T19:11:34.212 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-31T19:11:34.265 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:11:34.265 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:34.308 INFO:tasks.ceph:Sending monmap to node ubuntu@vm03.local 2026-03-31T19:11:34.308 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-31T19:11:34.308 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-31T19:11:34.308 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-31T19:11:34.320 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-31T19:11:34.320 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:34.364 INFO:tasks.ceph:Sending monmap to node ubuntu@vm05.local 2026-03-31T19:11:34.364 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-31T19:11:34.364 DEBUG:teuthology.orchestra.run.vm05:> sudo dd 
of=/etc/ceph/ceph.keyring 2026-03-31T19:11:34.364 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-31T19:11:34.377 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-31T19:11:34.377 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:34.425 INFO:tasks.ceph:Sending monmap to node ubuntu@vm06.local 2026-03-31T19:11:34.425 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-31T19:11:34.425 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-31T19:11:34.425 DEBUG:teuthology.orchestra.run.vm06:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-31T19:11:34.441 DEBUG:teuthology.orchestra.run.vm06:> set -ex 2026-03-31T19:11:34.442 DEBUG:teuthology.orchestra.run.vm06:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:34.484 INFO:tasks.ceph:Setting up mon nodes... 2026-03-31T19:11:34.484 INFO:tasks.ceph:Setting up mgr nodes... 2026-03-31T19:11:34.484 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring 2026-03-31T19:11:34.508 INFO:teuthology.orchestra.run.vm01.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring 2026-03-31T19:11:34.510 INFO:tasks.ceph:Setting up mds nodes... 2026-03-31T19:11:34.510 INFO:tasks.ceph_client:Setting up client nodes... 2026-03-31T19:11:34.510 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-31T19:11:34.570 INFO:teuthology.orchestra.run.vm01.stdout:creating /etc/ceph/ceph.client.0.keyring 2026-03-31T19:11:34.577 INFO:tasks.ceph:Running mkfs on osd nodes... 
2026-03-31T19:11:34.578 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm03.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm05.local'): {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm06.local'): {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2'}} 2026-03-31T19:11:34.578 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-0 2026-03-31T19:11:34.585 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:34.585 INFO:tasks.ceph:role: osd.0 2026-03-31T19:11:34.585 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm03.local 2026-03-31T19:11:34.585 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=0 inobtcount=0 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=2560, version=2 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-31T19:11:34.635 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-31T19:11:34.640 
INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done. 2026-03-31T19:11:34.642 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm03.local -o noatime 2026-03-31T19:11:34.642 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0 2026-03-31T19:11:34.732 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0 2026-03-31T19:11:34.740 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T19:11:34.740 INFO:teuthology.orchestra.run.vm03.stderr:sudo: /sbin/restorecon: command not found 2026-03-31T19:11:34.740 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /var/lib/ceph/osd/ceph-1 2026-03-31T19:11:34.790 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:34.790 INFO:tasks.ceph:role: osd.1 2026-03-31T19:11:34.790 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm03.local 2026-03-31T19:11:34.790 DEBUG:teuthology.orchestra.run.vm03:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout: = reflink=1 bigtime=0 inobtcount=0 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout: = sunit=0 swidth=0 blks 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout:log =internal log bsize=4096 blocks=2560, version=2 
2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-31T19:11:34.842 INFO:teuthology.orchestra.run.vm03.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-31T19:11:34.846 INFO:teuthology.orchestra.run.vm03.stdout:Discarding blocks...Done. 2026-03-31T19:11:34.848 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm03.local -o noatime 2026-03-31T19:11:34.848 DEBUG:teuthology.orchestra.run.vm03:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1 2026-03-31T19:11:34.903 DEBUG:teuthology.orchestra.run.vm03:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1 2026-03-31T19:11:34.952 INFO:teuthology.orchestra.run.vm03.stderr:sudo: /sbin/restorecon: command not found 2026-03-31T19:11:34.952 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T19:11:34.953 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:35.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.013+0000 7f081ed9ca40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory 2026-03-31T19:11:35.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.013+0000 7f081ed9ca40 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring 2026-03-31T19:11:35.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.013+0000 7f081ed9ca40 -1 bdev(0x56242fb87800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted 2026-03-31T19:11:35.016 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.013+0000 7f081ed9ca40 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid 2026-03-31T19:11:35.810 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph 
/var/lib/ceph/osd/ceph-0 2026-03-31T19:11:35.863 DEBUG:teuthology.orchestra.run.vm03:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-31T19:11:35.927 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.925+0000 7fd1d07c4a40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2026-03-31T19:11:35.927 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.925+0000 7fd1d07c4a40 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring 2026-03-31T19:11:35.927 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.925+0000 7fd1d07c4a40 -1 bdev(0x557a71d37800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted 2026-03-31T19:11:35.927 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-31T19:11:35.925+0000 7fd1d07c4a40 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid 2026-03-31T19:11:36.713 DEBUG:teuthology.orchestra.run.vm03:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-31T19:11:36.765 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/osd/ceph-2 2026-03-31T19:11:36.774 INFO:tasks.ceph:roles_to_devs: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:36.775 INFO:tasks.ceph:role: osd.2 2026-03-31T19:11:36.775 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm05.local 2026-03-31T19:11:36.775 DEBUG:teuthology.orchestra.run.vm05:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-31T19:11:36.824 INFO:teuthology.orchestra.run.vm05.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-31T19:11:36.824 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout: = 
crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout: = reflink=1 bigtime=0 inobtcount=0 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout: = sunit=0 swidth=0 blks 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout:log =internal log bsize=4096 blocks=2560, version=2 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-31T19:11:36.825 INFO:teuthology.orchestra.run.vm05.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-31T19:11:36.829 INFO:teuthology.orchestra.run.vm05.stdout:Discarding blocks...Done. 2026-03-31T19:11:36.830 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm05.local -o noatime 2026-03-31T19:11:36.830 DEBUG:teuthology.orchestra.run.vm05:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-2 2026-03-31T19:11:36.918 DEBUG:teuthology.orchestra.run.vm05:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2 2026-03-31T19:11:36.924 INFO:teuthology.orchestra.run.vm05.stderr:sudo: /sbin/restorecon: command not found 2026-03-31T19:11:36.924 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T19:11:36.925 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/osd/ceph-3 2026-03-31T19:11:36.974 INFO:tasks.ceph:roles_to_devs: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'} 2026-03-31T19:11:36.975 INFO:tasks.ceph:role: osd.3 2026-03-31T19:11:36.975 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm05.local 2026-03-31T19:11:36.975 DEBUG:teuthology.orchestra.run.vm05:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2 2026-03-31T19:11:37.022 
INFO:teuthology.orchestra.run.vm05.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout: = reflink=1 bigtime=0 inobtcount=0 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout: = sunit=0 swidth=0 blks 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout:log =internal log bsize=4096 blocks=2560, version=2 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-31T19:11:37.023 INFO:teuthology.orchestra.run.vm05.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-31T19:11:37.027 INFO:teuthology.orchestra.run.vm05.stdout:Discarding blocks...Done. 
2026-03-31T19:11:37.028 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm05.local -o noatime
2026-03-31T19:11:37.028 DEBUG:teuthology.orchestra.run.vm05:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-3
2026-03-31T19:11:37.084 DEBUG:teuthology.orchestra.run.vm05:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-3
2026-03-31T19:11:37.133 INFO:teuthology.orchestra.run.vm05.stderr:sudo: /sbin/restorecon: command not found
2026-03-31T19:11:37.134 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:11:37.134 DEBUG:teuthology.orchestra.run.vm05:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-31T19:11:37.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:37.195+0000 7fa4717fda40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2026-03-31T19:11:37.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:37.195+0000 7fa4717fda40 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
2026-03-31T19:11:37.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:37.195+0000 7fa4717fda40 -1 bdev(0x564cbab95800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted
2026-03-31T19:11:37.196 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:37.195+0000 7fa4717fda40 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid
2026-03-31T19:11:38.018 DEBUG:teuthology.orchestra.run.vm05:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-31T19:11:38.071 DEBUG:teuthology.orchestra.run.vm05:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-31T19:11:38.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:38.131+0000 7f0d5c8dea40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
2026-03-31T19:11:38.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:38.131+0000 7f0d5c8dea40 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring
2026-03-31T19:11:38.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:38.131+0000 7f0d5c8dea40 -1 bdev(0x560166753800 /var/lib/ceph/osd/ceph-3/block) open stat got: (1) Operation not permitted
2026-03-31T19:11:38.132 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-31T19:11:38.131+0000 7f0d5c8dea40 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid
2026-03-31T19:11:38.971 DEBUG:teuthology.orchestra.run.vm05:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-31T19:11:39.023 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-4
2026-03-31T19:11:39.032 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2'}
2026-03-31T19:11:39.032 INFO:tasks.ceph:role: osd.4
2026-03-31T19:11:39.032 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm06.local
2026-03-31T19:11:39.032 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-03-31T19:11:39.081 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-03-31T19:11:39.081 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-31T19:11:39.081 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-31T19:11:39.081 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=0 inobtcount=0
2026-03-31T19:11:39.081 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-31T19:11:39.082 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-31T19:11:39.082 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-31T19:11:39.082 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=2560, version=2
2026-03-31T19:11:39.082 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-31T19:11:39.082 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-31T19:11:39.086 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-31T19:11:39.087 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm06.local -o noatime
2026-03-31T19:11:39.087 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-4
2026-03-31T19:11:39.181 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-4
2026-03-31T19:11:39.228 INFO:teuthology.orchestra.run.vm06.stderr:sudo: /sbin/restorecon: command not found
2026-03-31T19:11:39.228 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:11:39.228 DEBUG:teuthology.orchestra.run.vm06:> sudo mkdir -p /var/lib/ceph/osd/ceph-5
2026-03-31T19:11:39.277 INFO:tasks.ceph:roles_to_devs: {'osd.4': '/dev/vg_nvme/lv_1', 'osd.5': '/dev/vg_nvme/lv_2'}
2026-03-31T19:11:39.278 INFO:tasks.ceph:role: osd.5
2026-03-31T19:11:39.278 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm06.local
2026-03-31T19:11:39.278 DEBUG:teuthology.orchestra.run.vm06:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-31T19:11:39.325 INFO:teuthology.orchestra.run.vm06.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout: = reflink=1 bigtime=0 inobtcount=0
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout: = sunit=0 swidth=0 blks
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout:log =internal log bsize=4096 blocks=2560, version=2
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-31T19:11:39.326 INFO:teuthology.orchestra.run.vm06.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-31T19:11:39.330 INFO:teuthology.orchestra.run.vm06.stdout:Discarding blocks...Done.
2026-03-31T19:11:39.332 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm06.local -o noatime
2026-03-31T19:11:39.332 DEBUG:teuthology.orchestra.run.vm06:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-5
2026-03-31T19:11:39.386 DEBUG:teuthology.orchestra.run.vm06:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-5
2026-03-31T19:11:39.436 INFO:teuthology.orchestra.run.vm06.stderr:sudo: /sbin/restorecon: command not found
2026-03-31T19:11:39.436 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-31T19:11:39.436 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 4 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-31T19:11:39.498 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:39.493+0000 7fd936485a40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-4/keyring: can't open /var/lib/ceph/osd/ceph-4/keyring: (2) No such file or directory
2026-03-31T19:11:39.498 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:39.493+0000 7fd936485a40 -1 created new key in keyring /var/lib/ceph/osd/ceph-4/keyring
2026-03-31T19:11:39.499 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:39.493+0000 7fd936485a40 -1 bdev(0x562012659800 /var/lib/ceph/osd/ceph-4/block) open stat got: (1) Operation not permitted
2026-03-31T19:11:39.499 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:39.493+0000 7fd936485a40 -1 bluestore(/var/lib/ceph/osd/ceph-4) _read_fsid unparsable uuid
2026-03-31T19:11:40.297 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
2026-03-31T19:11:40.345 DEBUG:teuthology.orchestra.run.vm06:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 5 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-31T19:11:40.410 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:40.405+0000 7f45dfed5a40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-5/keyring: can't open /var/lib/ceph/osd/ceph-5/keyring: (2) No such file or directory
2026-03-31T19:11:40.410 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:40.405+0000 7f45dfed5a40 -1 created new key in keyring /var/lib/ceph/osd/ceph-5/keyring
2026-03-31T19:11:40.410 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:40.405+0000 7f45dfed5a40 -1 bdev(0x55e372461800 /var/lib/ceph/osd/ceph-5/block) open stat got: (1) Operation not permitted
2026-03-31T19:11:40.410 INFO:teuthology.orchestra.run.vm06.stderr:2026-03-31T19:11:40.405+0000 7f45dfed5a40 -1 bluestore(/var/lib/ceph/osd/ceph-5) _read_fsid unparsable uuid
2026-03-31T19:11:41.225 DEBUG:teuthology.orchestra.run.vm06:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-5
2026-03-31T19:11:41.273 INFO:tasks.ceph:Reading keys from all nodes...
2026-03-31T19:11:41.273 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:41.273 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout
2026-03-31T19:11:41.281 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:41.281 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout
2026-03-31T19:11:41.289 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:41.289 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout
2026-03-31T19:11:41.337 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:41.337 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout
2026-03-31T19:11:41.346 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:41.346 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-3/keyring of=/dev/stdout
2026-03-31T19:11:41.395 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:41.395 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-4/keyring of=/dev/stdout
2026-03-31T19:11:41.403 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:41.403 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-5/keyring of=/dev/stdout
2026-03-31T19:11:41.453 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-31T19:11:41.453 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout
2026-03-31T19:11:41.457 INFO:tasks.ceph:Adding keys to all mons...
2026-03-31T19:11:41.457 DEBUG:teuthology.orchestra.run.vm01:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[mgr.x]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBmHMxpFoZEHhAAxDeqf/GWaNXt5f3LCqSEyg==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.0]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBnHMxpI9MmARAAN5EdniCQiRslmBSu8lUJ4Q==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.1]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBnHMxpxsZ3NxAAS/W8PXW5ppbB+3mVeII87Q==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.2]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBpHMxpPiSzCxAAh2HUAxgRbz+akwLVdOL5DQ==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.3]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBqHMxpqxrgBxAAZ1SEe/UVo1JP1X1RHxDiIA==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.4]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBrHMxpK9yGHRAARh7Mm4FLN4v5KycVr0pRoA==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[osd.5]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBsHMxpEpBDGBAAa+BY7UCn023gZoRP9Ju8UA==
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout:[client.0]
2026-03-31T19:11:41.504 INFO:teuthology.orchestra.run.vm01.stdout: key = AQBmHMxpRv/+IRAAryvv4iHutXuBHMnN71yE3g==
2026-03-31T19:11:41.505 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-31T19:11:41.567 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.632 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.696 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.766 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.836 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.4 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.900 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.5 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-31T19:11:41.963 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-31T19:11:42.028 INFO:tasks.ceph:Running mkfs on mon nodes...
2026-03-31T19:11:42.029 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mon/ceph-a
2026-03-31T19:11:42.076 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-03-31T19:11:42.156 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a
2026-03-31T19:11:42.205 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-03-31T19:11:42.248 INFO:tasks.ceph:Starting mon daemons in cluster ceph...
2026-03-31T19:11:42.248 INFO:tasks.ceph.mon.a:Restarting daemon
2026-03-31T19:11:42.248 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a
2026-03-31T19:11:42.290 INFO:tasks.ceph.mon.a:Started
2026-03-31T19:11:42.290 INFO:tasks.ceph:Starting mgr daemons in cluster ceph...
2026-03-31T19:11:42.290 INFO:tasks.ceph.mgr.x:Restarting daemon
2026-03-31T19:11:42.290 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x
2026-03-31T19:11:42.291 INFO:tasks.ceph.mgr.x:Started
2026-03-31T19:11:42.291 DEBUG:tasks.ceph:set 0 configs
2026-03-31T19:11:42.291 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph config dump
2026-03-31T19:11:42.387 INFO:teuthology.orchestra.run.vm01.stdout:WHO MASK LEVEL OPTION VALUE RO
2026-03-31T19:11:42.398 INFO:tasks.ceph:Setting crush tunables to default
2026-03-31T19:11:42.398 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd crush tunables default
2026-03-31T19:11:42.500 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default
2026-03-31T19:11:42.513 INFO:tasks.ceph:check_enable_crimson: False
2026-03-31T19:11:42.513 INFO:tasks.ceph:Starting osd daemons in cluster ceph...
2026-03-31T19:11:42.513 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:42.513 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout
2026-03-31T19:11:42.522 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-31T19:11:42.522 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout
2026-03-31T19:11:42.570 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:42.570 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout
2026-03-31T19:11:42.578 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-31T19:11:42.578 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-3/fsid of=/dev/stdout
2026-03-31T19:11:42.626 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:42.626 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-4/fsid of=/dev/stdout
2026-03-31T19:11:42.635 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-31T19:11:42.635 DEBUG:teuthology.orchestra.run.vm06:> sudo dd if=/var/lib/ceph/osd/ceph-5/fsid of=/dev/stdout
2026-03-31T19:11:42.685 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 8e42da84-9085-490f-bcb2-12a7601715cc 0
2026-03-31T19:11:42.835 INFO:teuthology.orchestra.run.vm06.stdout:0
2026-03-31T19:11:42.848 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new c3e0761c-df20-4f27-aa02-67cba4a367a0 1
2026-03-31T19:11:42.954 INFO:teuthology.orchestra.run.vm06.stdout:1
2026-03-31T19:11:42.966 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 33de56d5-999a-4940-9d6b-bd0e44b33124 2
2026-03-31T19:11:43.071 INFO:teuthology.orchestra.run.vm06.stdout:2
2026-03-31T19:11:43.083 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 9fdc304f-7cea-4331-b3c0-c11b69079ac4 3
2026-03-31T19:11:43.187 INFO:teuthology.orchestra.run.vm06.stdout:3
2026-03-31T19:11:43.199 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new ead5f212-78f7-4fbe-971c-f5e7aafbfd46 4
2026-03-31T19:11:43.308 INFO:teuthology.orchestra.run.vm06.stdout:4
2026-03-31T19:11:43.318 DEBUG:teuthology.orchestra.run.vm06:> sudo ceph --cluster ceph osd new 453e4d44-fda0-456d-bc9d-f74967039030 5
2026-03-31T19:11:43.428 INFO:teuthology.orchestra.run.vm06.stdout:5
2026-03-31T19:11:43.440 INFO:tasks.ceph.osd.0:Restarting daemon
2026-03-31T19:11:43.440 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0
2026-03-31T19:11:43.442 INFO:tasks.ceph.osd.0:Started
2026-03-31T19:11:43.442 INFO:tasks.ceph.osd.1:Restarting daemon
2026-03-31T19:11:43.442 DEBUG:teuthology.orchestra.run.vm03:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1
2026-03-31T19:11:43.443 INFO:tasks.ceph.osd.1:Started
2026-03-31T19:11:43.443 INFO:tasks.ceph.osd.2:Restarting daemon
2026-03-31T19:11:43.443 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2
2026-03-31T19:11:43.444 INFO:tasks.ceph.osd.2:Started
2026-03-31T19:11:43.444 INFO:tasks.ceph.osd.3:Restarting daemon
2026-03-31T19:11:43.444 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3
2026-03-31T19:11:43.444 INFO:tasks.ceph.osd.3:Started
2026-03-31T19:11:43.445 INFO:tasks.ceph.osd.4:Restarting daemon
2026-03-31T19:11:43.445 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 4
2026-03-31T19:11:43.445 INFO:tasks.ceph.osd.4:Started
2026-03-31T19:11:43.445 INFO:tasks.ceph.osd.5:Restarting daemon
2026-03-31T19:11:43.445 DEBUG:teuthology.orchestra.run.vm06:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 5
2026-03-31T19:11:43.446 INFO:tasks.ceph.osd.5:Started
2026-03-31T19:11:43.446 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json
2026-03-31T19:11:43.554 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-31T19:11:43.554
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":8,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T19:11:43.426008+0000","last_up_change":"0.000000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"c3e0761c-df20-4f27-aa02-67cba4a367a0","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-31T19:11:43.566 INFO:tasks.ceph.mgr.x.vm01.stderr:/usr/lib/python3/dist-packages/scipy/__init__.py:67: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-31T19:11:43.566 INFO:tasks.ceph.mgr.x.vm01.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-31T19:11:43.566 INFO:tasks.ceph.mgr.x.vm01.stderr: from numpy import show_config as show_numpy_config
2026-03-31T19:11:43.577 INFO:tasks.ceph.ceph_manager.ceph:[]
2026-03-31T19:11:43.577 INFO:tasks.ceph:Waiting for OSDs to come up
2026-03-31T19:11:43.735 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-31T19:11:43.729+0000 7f0ead135a40 -1 Falling back to public interface
2026-03-31T19:11:43.751 INFO:tasks.ceph.osd.1.vm03.stderr:2026-03-31T19:11:43.749+0000 7f42c74fea40 -1 Falling back to public interface
2026-03-31T19:11:43.784 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-31T19:11:43.783+0000 7ff167cf9a40 -1 Falling back to public interface
2026-03-31T19:11:43.796 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-31T19:11:43.795+0000 7f7ffdbd1a40 -1 Falling back to public interface
2026-03-31T19:11:43.967 INFO:tasks.ceph.osd.0.vm03.stderr:2026-03-31T19:11:43.965+0000 7fe44b150a40 -1 Falling back to public interface
2026-03-31T19:11:44.152 INFO:tasks.ceph.mgr.x.vm01.stderr:Failed to import NVMeoFClient and related components: cannot import name 'NVMeoFClient' from 'dashboard.services.nvmeof_client' (/usr/share/ceph/mgr/dashboard/services/nvmeof_client.py)
2026-03-31T19:11:44.180 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph
osd dump --format=json
2026-03-31T19:11:44.255 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-31T19:11:44.251+0000 7ff167cf9a40 -1 osd.2 0 log_to_monitors true
2026-03-31T19:11:44.264 INFO:tasks.ceph.osd.1.vm03.stderr:2026-03-31T19:11:44.261+0000 7f42c74fea40 -1 osd.1 0 log_to_monitors true
2026-03-31T19:11:44.269 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-31T19:11:44.261+0000 7f0ead135a40 -1 osd.4 0 log_to_monitors true
2026-03-31T19:11:44.275 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-31T19:11:44.275+0000 7f7ffdbd1a40 -1 osd.3 0 log_to_monitors true
2026-03-31T19:11:44.283 INFO:teuthology.misc.health.vm01.stdout:
2026-03-31T19:11:44.283 INFO:teuthology.misc.health.vm01.stdout:{"epoch":8,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T19:11:43.426008+0000","last_up_change":"0.000000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"c3e0761c-df20-4f27-aa02-67cba4a367a0","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-31T19:11:44.293 DEBUG:teuthology.misc:0 of 6 OSDs are up
2026-03-31T19:11:44.523 INFO:tasks.ceph.osd.0.vm03.stderr:2026-03-31T19:11:44.521+0000 7fe44b150a40 -1 osd.0 0 log_to_monitors true
2026-03-31T19:11:45.436 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-31T19:11:45.429+0000 7f0ea88bb640 -1 osd.4 0 waiting for initial osdmap
2026-03-31T19:11:45.436 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-31T19:11:45.435+0000 7f7ff9357640 -1 osd.3 0 waiting for initial osdmap
2026-03-31T19:11:45.436
INFO:tasks.ceph.osd.1.vm03.stderr:2026-03-31T19:11:45.433+0000 7f42c2c84640 -1 osd.1 0 waiting for initial osdmap
2026-03-31T19:11:45.437 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-31T19:11:45.435+0000 7ff16347f640 -1 osd.2 0 waiting for initial osdmap
2026-03-31T19:11:45.440 INFO:tasks.ceph.osd.1.vm03.stderr:2026-03-31T19:11:45.437+0000 7f42bea72640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:45.440 INFO:tasks.ceph.osd.4.vm06.stderr:2026-03-31T19:11:45.433+0000 7f0ea46a9640 -1 osd.4 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:45.440 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-31T19:11:45.439+0000 7ff15ea6c640 -1 osd.2 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:45.440 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-31T19:11:45.439+0000 7f7ff4944640 -1 osd.3 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:45.608 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-31T19:11:45.604+0000 7fe2d4405640 -1 mgr.server handle_report got status from non-daemon mon.a
2026-03-31T19:11:46.439 INFO:tasks.ceph.osd.0.vm03.stderr:2026-03-31T19:11:46.437+0000 7fe4468d6640 -1 osd.0 0 waiting for initial osdmap
2026-03-31T19:11:46.443 INFO:tasks.ceph.osd.0.vm03.stderr:2026-03-31T19:11:46.441+0000 7fe441ec3640 -1 osd.0 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:46.746 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-31T19:11:46.741+0000 7f055b26aa40 -1 Falling back to public interface
2026-03-31T19:11:47.235 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-31T19:11:47.229+0000 7f055b26aa40 -1 osd.5 0 log_to_monitors true
2026-03-31T19:11:48.444 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-31T19:11:48.437+0000 7f05569f0640 -1 osd.5 0 waiting for initial osdmap
2026-03-31T19:11:48.448 INFO:tasks.ceph.osd.5.vm06.stderr:2026-03-31T19:11:48.441+0000 7f05527de640 -1 osd.5 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-31T19:11:50.897 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json
2026-03-31T19:11:51.051 INFO:teuthology.misc.health.vm01.stdout:
2026-03-31T19:11:51.051 INFO:teuthology.misc.health.vm01.stdout:{"epoch":15,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T19:11:50.448447+0000","last_up_change":"2026-03-31T19:11:49.445731+0000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":7,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-31T19:11:46.614367+0000","flags":32769,"flags_names":"hashpspool,creating","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"13","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous"
:"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":6.059999942779541,"score_stable":6.059999942779541,"optimal_score":0.33000001311302185,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":1584285308}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":1584285308}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":1584285308}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":1584285308}]},"public_addr":"192.168.123.103:6804/1584285308","cluster_addr":"192.168.123.103:6805/1584285308","heartbeat_back_addr":"192.168.123.103:6807/1584285308","heartbeat_front_addr":"192.168.123.103:6806/1584285308","state":["exists","up"]},{"osd":1,"uuid":"c3e0761c-df20-
4f27-aa02-67cba4a367a0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6800","nonce":2764498957}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":2764498957}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":2764498957}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":2764498957}]},"public_addr":"192.168.123.103:6800/2764498957","cluster_addr":"192.168.123.103:6801/2764498957","heartbeat_back_addr":"192.168.123.103:6803/2764498957","heartbeat_front_addr":"192.168.123.103:6802/2764498957","state":["exists","up"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":473978242}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":473978242}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":473978242}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":473978242}]},"public_addr":"192.168.123.105:6800/473978242","cluster_addr":"192.168.123.105:6801/473978242","heartbeat_back_addr":"192.168.123.105:6803/473978242","heartbeat_front_addr":"192.168.123.105:6802/473978242","state":["exists","up"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":457425259}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":457425259}]},"heartbeat_back_addrs":{"addrvec":[{"type":
"v1","addr":"192.168.123.105:6807","nonce":457425259}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":457425259}]},"public_addr":"192.168.123.105:6804/457425259","cluster_addr":"192.168.123.105:6805/457425259","heartbeat_back_addr":"192.168.123.105:6807/457425259","heartbeat_front_addr":"192.168.123.105:6806/457425259","state":["exists","up"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800","nonce":682239721}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":682239721}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":682239721}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6802","nonce":682239721}]},"public_addr":"192.168.123.106:6800/682239721","cluster_addr":"192.168.123.106:6801/682239721","heartbeat_back_addr":"192.168.123.106:6803/682239721","heartbeat_front_addr":"192.168.123.106:6802/682239721","state":["exists","up"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":146747963}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":146747963}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":146747963}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":146747963}]},"public_addr":"192.168.123.106:6804/146747963","cluster_addr":"192.168.123.106:6805/146747963","heartbeat_back_addr":"192.168.123.106:6807/146747963","heartbeat_front_addr":"192.168.123.106:6806/146747963","state":["exists","up"]}],"osd_
xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.219411+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.277587+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.236606+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:48.261855+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-31T19:11:51.062 DEBUG:teuthology.misc:6 of 6 OSDs are up 2026-03-31T19:11:51.063 INFO:tasks.ceph:Creating RBD pool 2026-03-31T19:11:51.063 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd pool create rbd 8 2026-03-31T19:11:51.458 INFO:teuthology.orchestra.run.vm01.stderr:pool 'rbd' created 2026-03-31T19:11:51.472 
DEBUG:teuthology.orchestra.run.vm01:> rbd --cluster ceph pool init rbd 2026-03-31T19:11:54.478 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 2026-03-31T19:11:54.479 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json 2026-03-31T19:11:54.479 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting 2026-03-31T19:11:54.635 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:54.648 INFO:teuthology.orchestra.run.vm01.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}] 2026-03-31T19:11:54.648 INFO:tasks.ceph_manager:config epoch is 1 2026-03-31T19:11:54.648 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-31T19:11:54.648 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available 2026-03-31T19:11:54.648 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json 2026-03-31T19:11:54.828 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:54.841 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"flags":0,"active_gid":4102,"active_name":"x","active_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.101:6800","nonce":183664126}]},"active_addr":"192.168.123.101:6800/183664126","active_change":"2026-03-31T19:11:44.596671+0000","active_mgr_features":4541880224203014143,"available":true,"standbys":[],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level"
:"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current `PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"
","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default
_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":
"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, 
version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack traces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced
","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge 
threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":26326
6474}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1589460974}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":544470537}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":3311804667}]}]} 2026-03-31T19:11:54.842 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 2026-03-31T19:11:54.842 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-31T19:11:54.842 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-31T19:11:55.007 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:55.007 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":19,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T19:11:54.468375+0000","last_up_change":"2026-03-31T19:11:49.445731+0000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":7,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-31T19:11:46.614367+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid"
:"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":6.059999942779541,"score_stable":6.059999942779541,"optimal_score":0.33000001311302185,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-31T19:11:51.217694+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":19,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":1584285308}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":1584285308}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":1584285308}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":1584285308}]},"public_addr":"192.168.123.103:6804/1584285308","cluster_addr":"192.168.123.103:6805/1584285308","heartbeat_back_addr":"192.168.123.103:6807/1584285308","heartbeat_front_addr":"192.168.123.103:6806/1584285308","state":["exists","up"]},{"osd":1,"uuid":"c3e0761c-df20-4f27-aa02-67cba4a367a0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6800","nonce":2764498957}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":2764498957}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":2764498957}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":2764498957}]},"public_addr":"192.168.123.103:6800/2764498957","cluster_addr":"192.168.123.103:6801/2764498957","heartbeat_back_addr":"192.168.123.103:6803/2764498957","heartbeat_front_addr":"192.168.123.103:6802/2764498957","state":["exists","up"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from
":11,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":473978242}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":473978242}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":473978242}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":473978242}]},"public_addr":"192.168.123.105:6800/473978242","cluster_addr":"192.168.123.105:6801/473978242","heartbeat_back_addr":"192.168.123.105:6803/473978242","heartbeat_front_addr":"192.168.123.105:6802/473978242","state":["exists","up"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":457425259}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":457425259}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":457425259}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":457425259}]},"public_addr":"192.168.123.105:6804/457425259","cluster_addr":"192.168.123.105:6805/457425259","heartbeat_back_addr":"192.168.123.105:6807/457425259","heartbeat_front_addr":"192.168.123.105:6806/457425259","state":["exists","up"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800","nonce":682239721}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":682239721}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":682239721}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123
.106:6802","nonce":682239721}]},"public_addr":"192.168.123.106:6800/682239721","cluster_addr":"192.168.123.106:6801/682239721","heartbeat_back_addr":"192.168.123.106:6803/682239721","heartbeat_front_addr":"192.168.123.106:6802/682239721","state":["exists","up"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":146747963}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":146747963}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":146747963}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":146747963}]},"public_addr":"192.168.123.106:6804/146747963","cluster_addr":"192.168.123.106:6805/146747963","heartbeat_back_addr":"192.168.123.106:6807/146747963","heartbeat_front_addr":"192.168.123.106:6806/146747963","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.219411+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.277587+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.236606+0000","
dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:48.261855+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-31T19:11:55.019 INFO:tasks.ceph.ceph_manager.ceph:all up! 2026-03-31T19:11:55.019 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-31T19:11:55.179 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:55.179 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":19,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T19:11:54.468375+0000","last_up_change":"2026-03-31T19:11:49.445731+0000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":7,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-31T19:11:46.614367+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"
none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":6.059999942779541,"score_stable":6.059999942779541,"optimal_score":0.33000001311302185,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-31T19:11:51.217694+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":19,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_prom
ote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":1584285308}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":1584285308}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":1584285308}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":1584285308}]},"public_addr":"192.168.123.103:6804/1584285308","cluster_addr":"192.168.123.103:6805/1584285308","heartbeat_back_addr":"192.168.123.103:6807/1584285308","heartbeat_front_addr":"192.168.123.103:6806/1584285308","state":["exists","up"]},{"osd":1,"uuid":"c3e0761c-df20-4f27-aa02-67cba4a367a0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6800","nonce":2764498957}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":2764498957}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":2764498957}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":2764498957}]},"public_addr":"192.168.123.103:6800/2764498957","cluster_addr":"192.168.123.103:6801/2764498957","heartb
eat_back_addr":"192.168.123.103:6803/2764498957","heartbeat_front_addr":"192.168.123.103:6802/2764498957","state":["exists","up"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":14,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":473978242}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":473978242}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":473978242}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":473978242}]},"public_addr":"192.168.123.105:6800/473978242","cluster_addr":"192.168.123.105:6801/473978242","heartbeat_back_addr":"192.168.123.105:6803/473978242","heartbeat_front_addr":"192.168.123.105:6802/473978242","state":["exists","up"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":457425259}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":457425259}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":457425259}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":457425259}]},"public_addr":"192.168.123.105:6804/457425259","cluster_addr":"192.168.123.105:6805/457425259","heartbeat_back_addr":"192.168.123.105:6807/457425259","heartbeat_front_addr":"192.168.123.105:6806/457425259","state":["exists","up"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800",
"nonce":682239721}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":682239721}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":682239721}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6802","nonce":682239721}]},"public_addr":"192.168.123.106:6800/682239721","cluster_addr":"192.168.123.106:6801/682239721","heartbeat_back_addr":"192.168.123.106:6803/682239721","heartbeat_front_addr":"192.168.123.106:6802/682239721","state":["exists","up"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":16,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":146747963}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":146747963}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":146747963}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":146747963}]},"public_addr":"192.168.123.106:6804/146747963","cluster_addr":"192.168.123.106:6805/146747963","heartbeat_back_addr":"192.168.123.106:6807/146747963","heartbeat_front_addr":"192.168.123.106:6806/146747963","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.219411+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":454188022420
3014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.277587+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.236606+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:48.261855+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage 
/home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.4 flush_pg_stats 2026-03-31T19:11:55.194 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 flush_pg_stats 2026-03-31T19:11:55.322 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:55.323 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-31T19:11:55.326 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-31T19:11:55.326 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-31T19:11:55.335 INFO:teuthology.orchestra.run.vm01.stdout:47244640260 2026-03-31T19:11:55.336 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-31T19:11:55.340 INFO:teuthology.orchestra.run.vm01.stdout:60129542147 2026-03-31T19:11:55.340 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-31T19:11:55.358 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:55.358 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-31T19:11:55.372 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:55.372 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-31T19:11:55.516 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 
2026-03-31T19:11:55.543 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.1 2026-03-31T19:11:55.549 INFO:teuthology.orchestra.run.vm01.stdout:51539607554 2026-03-31T19:11:55.566 INFO:tasks.ceph.ceph_manager.ceph:need seq 51539607555 got 51539607554 for osd.0 2026-03-31T19:11:55.590 INFO:teuthology.orchestra.run.vm01.stdout:60129542146 2026-03-31T19:11:55.606 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:55.607 INFO:tasks.ceph.ceph_manager.ceph:need seq 60129542147 got 60129542146 for osd.5 2026-03-31T19:11:55.620 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640260 got 47244640259 for osd.3 2026-03-31T19:11:55.655 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 2026-03-31T19:11:55.659 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 2026-03-31T19:11:55.668 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.2 2026-03-31T19:11:55.672 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.4 2026-03-31T19:11:56.544 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-31T19:11:56.567 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-31T19:11:56.608 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-31T19:11:56.621 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-31T19:11:56.668 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-31T19:11:56.673 
DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-31T19:11:56.708 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 2026-03-31T19:11:56.724 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.1 2026-03-31T19:11:56.776 INFO:teuthology.orchestra.run.vm01.stdout:51539607554 2026-03-31T19:11:56.790 INFO:tasks.ceph.ceph_manager.ceph:need seq 51539607555 got 51539607554 for osd.0 2026-03-31T19:11:56.838 INFO:teuthology.orchestra.run.vm01.stdout:60129542146 2026-03-31T19:11:56.844 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:56.853 INFO:tasks.ceph.ceph_manager.ceph:need seq 60129542147 got 60129542146 for osd.5 2026-03-31T19:11:56.863 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640260 got 47244640259 for osd.3 2026-03-31T19:11:56.888 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 2026-03-31T19:11:56.902 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.2 2026-03-31T19:11:56.905 INFO:teuthology.orchestra.run.vm01.stdout:47244640258 2026-03-31T19:11:56.918 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640258 for osd.4 2026-03-31T19:11:57.725 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-31T19:11:57.791 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-31T19:11:57.854 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.5 2026-03-31T19:11:57.863 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster 
ceph osd last-stat-seq osd.3 2026-03-31T19:11:57.900 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:57.903 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-31T19:11:57.918 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640259 for osd.1 2026-03-31T19:11:57.918 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:57.918 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.4 2026-03-31T19:11:57.968 INFO:teuthology.orchestra.run.vm01.stdout:51539607555 2026-03-31T19:11:57.988 INFO:tasks.ceph.ceph_manager.ceph:need seq 51539607555 got 51539607555 for osd.0 2026-03-31T19:11:57.988 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:58.045 INFO:teuthology.orchestra.run.vm01.stdout:60129542148 2026-03-31T19:11:58.060 INFO:tasks.ceph.ceph_manager.ceph:need seq 60129542147 got 60129542148 for osd.5 2026-03-31T19:11:58.060 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:58.071 INFO:teuthology.orchestra.run.vm01.stdout:47244640260 2026-03-31T19:11:58.084 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640260 got 47244640260 for osd.3 2026-03-31T19:11:58.085 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:58.110 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:58.123 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640259 for osd.2 2026-03-31T19:11:58.123 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:58.159 INFO:teuthology.orchestra.run.vm01.stdout:47244640259 2026-03-31T19:11:58.172 INFO:tasks.ceph.ceph_manager.ceph:need seq 47244640259 got 47244640259 for osd.4 2026-03-31T19:11:58.172 DEBUG:teuthology.parallel:result is None 2026-03-31T19:11:58.173 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-31T19:11:58.173 
DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T19:11:58.369 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:58.369 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T19:11:58.382 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":18,"stamp":"2026-03-31T19:11:56.603070+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":163012,"kb_used_data":1780,"kb_used_omap":48,"kb_used_meta":160847,"kb_avail":566068028,"statfs":{"total":579820584960,"available":579653660672,"internally_reserved
":0,"allocated":1822720,"data_stored":1172842,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":49373,"internal_metadata":164708131},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":13,"reported_epoch":18,"state":"active+clean","last_fresh":"2026-03-31T19:11:53.655140+0000","last_change":"2026-03-31T19:11:52.464462+0000","last_active":"2026-03-31T19:11:53.655140+0000","last_peered":"2026-03-31T19:11:53.655140+0000","last_clean":"2026-03-31T19:11:53.655
140+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+0000","last_unstale":"2026-03-31T19:11:53.655140+0000","last_undegraded":"2026-03-31T19:11:53.655140+0000","last_fullsized":"2026-03-31T19:11:53.655140+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T06:21:54.340464+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing
":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478640+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T19:11:54.478640+0000","last_peered":"2026-03-31T19:11:54.478640+0000","last_clean":"2026-03-31T19:11:54.478640+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T19:11:54.478640+0000","last_undegraded":"2026-03-31T19:11:54.478640+0000","last_fullsized":"2026-03-31T19:11:54.478640+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:55.268167+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T19:11:55.268167+0000","last_peered":"2026-03-31T19:11:55.268167+0000","last_clean":"2026-03-31T19:11:55.268167+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:11:52.468346+0000","last_unstale":"2026-03-31T19:11:55.268167+0000","last_undegraded":"2026-03-31T19:11:55.268167+0000","last_fullsized":"2026-03-31T19:11:55.268167+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478730+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T19:11:54.478730+0000","last_peered":"2026-03-31T19:11:54.478730+0000","last_clean":"2026-03-31T19:11:54.478730+0000","last_became_active":"2026-03-31T19:11:52.471594+000
0","last_became_peered":"2026-03-31T19:11:52.471594+0000","last_unstale":"2026-03-31T19:11:54.478730+0000","last_undegraded":"2026-03-31T19:11:54.478730+0000","last_fullsized":"2026-03-31T19:11:54.478730+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked
_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"19'2","reported_seq":22,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.471564+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T19:11:54.471564+0000","last_peered":"2026-03-31T19:11:54.471564+0000","last_clean":"2026-03-31T19:11:54.471564+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T19:11:54.471564+0000","last_undegraded":"2026-03-31T19:11:54.471564+0000","last_fullsized":"2026-03-31T19:11:54.471564+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478785+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T19:11:54.478785+0000","last_peered":"2026-03-31T19:11:54.478785+0000","last_clean":"2026-03-31T19:11:54.478785+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T19:11:54.478785+0000","last_undegraded":"2026-03-31T19:11:54.478785+0000","last_fullsized":"2026-03-31T19:11:54.478785+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub
_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469597+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T19:11:54.469597+0000","last_peered":"2026-03-31T19:11:54.469597+0000","last_clean":"2026-03-31T19:11:54.469597+0000","last_became_active":"2026-03-31T19:11:5
2.462306+0000","last_became_peered":"2026-03-31T19:11:52.462306+0000","last_unstale":"2026-03-31T19:11:54.469597+0000","last_undegraded":"2026-03-31T19:11:54.469597+0000","last_fullsized":"2026-03-31T19:11:54.469597+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocke
d_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"17'1","reported_seq":21,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469647+0000","last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T19:11:54.469647+0000","last_peered":"2026-03-31T19:11:54.469647+0000","last_clean":"2026-03-31T19:11:54.469647+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T19:11:54.469647+0000","last_undegraded":"2026-03-31T19:11:54.469647+0000","last_fullsized":"2026-03-31T19:11:54.469647+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'32","reported_seq":61,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469542+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T19:11:54.469542+0000","last_peered":"2026-03-31T19:11:54.469542+0000","last_clean":"2026-03-31T19:11:54.469542+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T19:11:54.469542+0000","last_undegraded":"2026-03-31T19:11:54.469542+0000","last_fullsized":"2026-03-31T19:11:54.469542+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","las
t_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow
_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size
":32,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542148,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27448,"kb_used_data":616,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344392,"statfs":{"total":96636764160,"available":96608657408,"internally_reserved":0,"allocated":630784,"data_stored":514721,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7471,"internal_metadata":27452113},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":12,"seq":51539607555,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":26984,"kb_used_data":152,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344856,"statfs":{"total":96636764160,"available":96609132544,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8121,"internal_metadata":27451463},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":11,"seq":47244640259,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27016,"kb_used_data":152,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344824,"statfs":{"total":96636764160,"available":96609099776,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,1,
2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":11,"seq":47244640260,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27016,"kb_used_data":152,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344824,"statfs":{"total":96636764160,"available":96609099776,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7468,"internal_metadata":27452116},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":11,"seq":47244640259,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27536,"kb_used_data":544,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344304,"statfs":{"total":96636764160,"available":96608567296,"internally_reserved":0,"allocated":557056,"data_stored":449050,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":11,"seq":47244640259,"num_pgs":5,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27012,"kb_used_data":164,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344828,"statfs":{"total":96636764160,"available":96609103872,"internally_reserved":0,"
allocated":167936,"data_stored":55441,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}
,{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T19:11:58.382 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T19:11:58.535 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:58.535 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T19:11:58.549 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":18,"stamp":"2026-03-31T19:11:56.603070+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_co
mpressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":163012,"kb_used_data":1780,"kb_used_omap":48,"kb_used_meta":160847,"kb_avail":566068028,"statfs":{"total":579820584960,"available":579653660672,"internally_reserved":0,"allocated":1822720,"data_stored":1172842,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":49373,"internal_metadata":164708131},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,
"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"0.000000"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":13,"reported_epoch":18,"state":"active+clean","last_fresh":"2026-03-31T19:11:53.655140+0000","last_change":"2026-03-31T19:11:52.464462+0000","last_active":"2026-03-31T19:11:53.655140+0000","last_peered":"2026-03-31T19:11:53.655140+0000","last_clean":"2026-03-31T19:11:53.655140+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+0000","last_unstale":"2026-03-31T19:11:53.655140+0000","last_undegraded":"2026-03-31T19:11:53.655140+0000","last_fullsized":"2026-03-31T19:11:53.655140+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T06:21:54.340464+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478640+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T19:11:54.478640+0000","last_peered":"2026-03-31T19:11:54.478640+0000","last_clean":"2026-03-31T19:11:54.478640+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T19:11:54.478640+0000","last_undegraded":"2026-03-31T19:11:54.478640+0000","last_fullsized":"2026-03-31T19:11:54.478640+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0
000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:55.268167+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T19:11:55.268167+0000","last_peered":"2026-03-31T19:11:55.268167+0000","last_clean":"2026-03-31T19:11:55.268167+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T
19:11:52.468346+0000","last_unstale":"2026-03-31T19:11:55.268167+0000","last_undegraded":"2026-03-31T19:11:55.268167+0000","last_fullsized":"2026-03-31T19:11:55.268167+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purg
ed_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478730+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T19:11:54.478730+0000","last_peered":"2026-03-31T19:11:54.478730+0000","last_clean":"2026-03-31T19:11:54.478730+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T19:11:52.471594+0000","last_unstale":"2026-03-31T19:11:54.478730+0000","last_undegraded":"2026-03-31T19:11:54.478730+0000","last_fullsized":"2026-03-31T19:11:54.478730+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"19'2","reported_seq":22,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.471564+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T19:11:54.471564+0000","last_peered":"2026-03-31T19:11:54.471564+0000","last_clean":"2026-03-31T19:11:54.471564+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T19:11:54.471564+0000","last_undegraded":"2026-03-31T19:11:54.471564+0000","last_fullsized":"2026-03-31T19:11:54.471564+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.478785+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T19:11:54.478785+0000","last_peered":"2026-03-31T19:11:54.478785+0000","last_clean":"2026-03-31T19:11:54.478785+0000","last_became_active":"2026-03-31T19:11:
52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T19:11:54.478785+0000","last_undegraded":"2026-03-31T19:11:54.478785+0000","last_fullsized":"2026-03-31T19:11:54.478785+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts
":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469597+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T19:11:54.469597+0000","last_peered":"2026-03-31T19:11:54.469597+0000","last_clean":"2026-03-31T19:11:54.469597+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19:11:52.462306+0000","last_unstale":"2026-03-31T19:11:54.469597+0000","last_undegraded":"2026-03-31T19:11:54.469597+0000","last_fullsized":"2026-03-31T19:11:54.469597+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"17'1","reported_seq":21,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469647+0000","last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T19:11:54.469647+0000","last_peered":"2026-03-31T19:11:54.469647+0000","last_clean":"2026-03-31T19:11:54.469647+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T19:11:54.469647+0000","last_undegraded":"2026-03-31T19:11:54.469647+0000","last_fullsized":"2026-03-31T19:11:54.469647+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub
_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"16'32","reported_seq":61,"reported_epoch":19,"state":"active+clean","last_fresh":"2026-03-31T19:11:54.469542+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T19:11:54.469542+0000","last_peered":"2026-03-31T19:11:54.469542+0000","last_clean":"2026-03-31T19:11:54.469542+0000","last_became_active":"2026-03-31T19:11
:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T19:11:54.469542+0000","last_undegraded":"2026-03-31T19:11:54.469542+0000","last_fullsized":"2026-03-31T19:11:54.469542+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"bl
ocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_e
vict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542148,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27448,"kb_used_data":616,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344392,"statfs":{"total":96636764160,"available":96608657408,"internally_reserved":0,"allocated":630784,"data_stored":514721,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7471,"internal_metadata":27452113},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":12,"seq":51539607555,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":26984,"kb_used_data":152,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344856,"statfs":{"total":96636764160,"available":96609132544,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8121,"internal_metadata":27451463},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":11,"seq"
:47244640259,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27016,"kb_used_data":152,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344824,"statfs":{"total":96636764160,"available":96609099776,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":3,"up_from":11,"seq":47244640260,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27016,"kb_used_data":152,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344824,"statfs":{"total":96636764160,"available":96609099776,"internally_reserved":0,"allocated":155648,"data_stored":51210,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7468,"internal_metadata":27452116},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":11,"seq":47244640259,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27536,"kb_used_data":544,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344304,"statfs":{"total":96636764160,"available":96608567296,"internally_reserved":0,"allocated":557056,"data_stored":449050,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queu
e_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":11,"seq":47244640259,"num_pgs":5,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27012,"kb_used_data":164,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344828,"statfs":{"total":96636764160,"available":96609103872,"internally_reserved":0,"allocated":167936,"data_stored":55441,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"i
nternal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T19:11:58.549 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-31T19:11:58.549 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-31T19:11:58.549 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-31T19:11:58.549 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-31T19:11:58.720 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T19:11:58.720 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-31T19:11:58.733 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-31T19:11:58.733 INFO:teuthology.run_tasks:Running task workunit... 2026-03-31T19:11:58.737 INFO:tasks.workunit:Pulling workunits from ref 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-31T19:11:58.737 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-31T19:11:58.737 DEBUG:teuthology.orchestra.run.vm01:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-31T19:11:58.741 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T19:11:58.741 INFO:teuthology.orchestra.run.vm01.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-31T19:11:58.741 DEBUG:teuthology.orchestra.run.vm01:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-31T19:11:58.788 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-31T19:11:58.788 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-31T19:11:58.832 INFO:tasks.workunit:timeout=3h 2026-03-31T19:11:58.832 INFO:tasks.workunit:cleanup=True 2026-03-31T19:11:58.833 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-31T19:11:58.877 INFO:tasks.workunit.client.0.vm01.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-31T19:12:34.642 INFO:tasks.workunit.client.0.vm01.stderr:Note: switching to '0392f78529848ec72469e8e431875cb98d3a5fb4'. 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:state without impacting any branches by switching back to a branch. 
2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: git switch -c 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:Or undo this operation with: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: git switch - 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-31T19:12:34.643 INFO:tasks.workunit.client.0.vm01.stderr:HEAD is now at 0392f785298 qa/tasks/keystone: restart mariadb for rocky and alma linux too 2026-03-31T19:12:34.650 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-31T19:12:34.696 INFO:tasks.workunit.client.0.vm01.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-31T19:12:34.697 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-31T19:12:34.697 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-31T19:12:34.737 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 
2026-03-31T19:12:34.768 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-31T19:12:34.791 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-31T19:12:34.792 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-31T19:12:34.792 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-31T19:12:34.817 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-31T19:12:34.819 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-31T19:12:34.819 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-31T19:12:34.864 INFO:tasks.workunit:Running workunits matching rados/ec-esb-fio.sh on client.0... 2026-03-31T19:12:34.865 INFO:tasks.workunit:Running workunit rados/ec-esb-fio.sh... 
2026-03-31T19:12:34.865 DEBUG:teuthology.orchestra.run.vm01:workunit test rados/ec-esb-fio.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0392f78529848ec72469e8e431875cb98d3a5fb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rados/ec-esb-fio.sh 2026-03-31T19:12:34.911 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -f /etc/debian_version ]] 2026-03-31T19:12:34.911 INFO:tasks.workunit.client.0.vm01.stderr:+ sudo apt-get update 2026-03-31T19:12:34.980 INFO:tasks.workunit.client.0.vm01.stdout:Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-31T19:12:34.982 INFO:tasks.workunit.client.0.vm01.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-31T19:12:34.990 INFO:tasks.workunit.client.0.vm01.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-31T19:12:35.062 INFO:tasks.workunit.client.0.vm01.stdout:Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-31T19:12:35.411 INFO:tasks.workunit.client.0.vm01.stdout:Ign:5 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy InRelease 2026-03-31T19:12:35.523 INFO:tasks.workunit.client.0.vm01.stdout:Hit:6 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release 2026-03-31T19:12:35.635 INFO:tasks.workunit.client.0.vm01.stdout:Ign:7 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-31T19:12:36.331 
INFO:tasks.workunit.client.0.vm01.stdout:Reading package lists... 2026-03-31T19:12:36.343 INFO:tasks.workunit.client.0.vm01.stderr:+ sudo apt-get install -y git gcc make librados-dev librbd-dev zlib1g-dev libaio-dev 2026-03-31T19:12:36.374 INFO:tasks.workunit.client.0.vm01.stdout:Reading package lists... 2026-03-31T19:12:36.535 INFO:tasks.workunit.client.0.vm01.stdout:Building dependency tree... 2026-03-31T19:12:36.535 INFO:tasks.workunit.client.0.vm01.stdout:Reading state information... 2026-03-31T19:12:36.650 INFO:tasks.workunit.client.0.vm01.stdout:gcc is already the newest version (4:11.2.0-1ubuntu1). 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:gcc set to manually installed. 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:make is already the newest version (4.3-4.1build1). 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:make set to manually installed. 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:git is already the newest version (1:2.34.1-1ubuntu1.17). 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:zlib1g-dev is already the newest version (1:1.2.11.dfsg-2ubuntu9.2). 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:zlib1g-dev set to manually installed. 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T19:12:36.651 INFO:tasks.workunit.client.0.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T19:12:36.669 INFO:tasks.workunit.client.0.vm01.stdout:The following NEW packages will be installed: 2026-03-31T19:12:36.669 INFO:tasks.workunit.client.0.vm01.stdout: libaio-dev librados-dev librbd-dev 2026-03-31T19:12:36.763 INFO:tasks.workunit.client.0.vm01.stdout:0 upgraded, 3 newly installed, 0 to remove and 50 not upgraded. 2026-03-31T19:12:36.763 INFO:tasks.workunit.client.0.vm01.stdout:Need to get 318 kB of archives. 2026-03-31T19:12:36.763 INFO:tasks.workunit.client.0.vm01.stdout:After this operation, 1367 kB of additional disk space will be used. 2026-03-31T19:12:36.763 INFO:tasks.workunit.client.0.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 libaio-dev amd64 0.3.112-13build1 [21.2 kB] 2026-03-31T19:12:37.266 INFO:tasks.workunit.client.0.vm01.stdout:Get:2 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librados-dev amd64 20.2.0-721-g5bb32787-1jammy [266 kB] 2026-03-31T19:12:37.697 INFO:tasks.workunit.client.0.vm01.stdout:Get:3 https://3.chacra.ceph.com/r/ceph/tentacle-release/5bb3278730741031382ca9c3dc9d221a942e06a2/ubuntu/jammy/flavors/default jammy/main amd64 librbd-dev amd64 20.2.0-721-g5bb32787-1jammy [31.0 kB] 2026-03-31T19:12:37.823 INFO:tasks.workunit.client.0.vm01.stderr:debconf: unable to initialize frontend: Dialog 2026-03-31T19:12:37.824 INFO:tasks.workunit.client.0.vm01.stderr:debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.) 2026-03-31T19:12:37.824 INFO:tasks.workunit.client.0.vm01.stderr:debconf: falling back to frontend: Readline 2026-03-31T19:12:37.829 INFO:tasks.workunit.client.0.vm01.stderr:debconf: unable to initialize frontend: Readline 2026-03-31T19:12:37.829 INFO:tasks.workunit.client.0.vm01.stderr:debconf: (This frontend requires a controlling tty.) 
2026-03-31T19:12:37.829 INFO:tasks.workunit.client.0.vm01.stderr:debconf: falling back to frontend: Teletype 2026-03-31T19:12:37.831 INFO:tasks.workunit.client.0.vm01.stderr:dpkg-preconfigure: unable to re-open stdin: 2026-03-31T19:12:37.854 INFO:tasks.workunit.client.0.vm01.stdout:Fetched 318 kB in 1s (310 kB/s) 2026-03-31T19:12:37.866 INFO:tasks.workunit.client.0.vm01.stdout:Selecting previously unselected package libaio-dev:amd64. 2026-03-31T19:12:37.895 INFO:tasks.workunit.client.0.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126150 files and directories currently installed.) 2026-03-31T19:12:37.897 INFO:tasks.workunit.client.0.vm01.stdout:Preparing to unpack .../libaio-dev_0.3.112-13build1_amd64.deb ... 2026-03-31T19:12:37.898 INFO:tasks.workunit.client.0.vm01.stdout:Unpacking libaio-dev:amd64 (0.3.112-13build1) ... 2026-03-31T19:12:37.917 INFO:tasks.workunit.client.0.vm01.stdout:Selecting previously unselected package librados-dev. 2026-03-31T19:12:37.924 INFO:tasks.workunit.client.0.vm01.stdout:Preparing to unpack .../librados-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:12:37.925 INFO:tasks.workunit.client.0.vm01.stdout:Unpacking librados-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:12:37.943 INFO:tasks.workunit.client.0.vm01.stdout:Selecting previously unselected package librbd-dev. 
2026-03-31T19:12:37.949 INFO:tasks.workunit.client.0.vm01.stdout:Preparing to unpack .../librbd-dev_20.2.0-721-g5bb32787-1jammy_amd64.deb ... 2026-03-31T19:12:37.950 INFO:tasks.workunit.client.0.vm01.stdout:Unpacking librbd-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:12:37.979 INFO:tasks.workunit.client.0.vm01.stdout:Setting up librados-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:12:37.982 INFO:tasks.workunit.client.0.vm01.stdout:Setting up libaio-dev:amd64 (0.3.112-13build1) ... 2026-03-31T19:12:37.985 INFO:tasks.workunit.client.0.vm01.stdout:Setting up librbd-dev (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T19:12:37.988 INFO:tasks.workunit.client.0.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-31T19:12:38.371 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.372 INFO:tasks.workunit.client.0.vm01.stdout:Running kernel seems to be up-to-date. 2026-03-31T19:12:38.372 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.372 INFO:tasks.workunit.client.0.vm01.stdout:Services to be restarted: 2026-03-31T19:12:38.374 INFO:tasks.workunit.client.0.vm01.stdout: systemctl restart apache-htcacheclean.service 2026-03-31T19:12:38.380 INFO:tasks.workunit.client.0.vm01.stdout: systemctl restart rsyslog.service 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout:Service restarts being deferred: 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: systemctl restart networkd-dispatcher.service 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: systemctl restart unattended-upgrades.service 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout:No containers need to be restarted. 
2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout:No user sessions are running outdated binaries. 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:38.383 INFO:tasks.workunit.client.0.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-31T19:12:39.224 INFO:tasks.workunit.client.0.vm01.stderr:+ git clone -b master https://github.com/axboe/fio.git /home/ubuntu/cephtest/fio 2026-03-31T19:12:39.224 INFO:tasks.workunit.client.0.vm01.stderr:Cloning into '/home/ubuntu/cephtest/fio'... 2026-03-31T19:12:45.128 INFO:tasks.workunit.client.0.vm01.stderr:+ cd /home/ubuntu/cephtest/fio 2026-03-31T19:12:45.128 INFO:tasks.workunit.client.0.vm01.stderr:+ ./configure 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:Operating system Linux 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:CPU x86_64 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:Big endian no 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:Compiler gcc 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:Cross compile no 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:12:45.196 INFO:tasks.workunit.client.0.vm01.stdout:Static build no 2026-03-31T19:12:45.238 INFO:tasks.workunit.client.0.vm01.stdout:Wordsize 64 2026-03-31T19:12:45.259 INFO:tasks.workunit.client.0.vm01.stdout:zlib yes 2026-03-31T19:12:45.266 INFO:tasks.workunit.client.0.vm01.stdout:fcntl(F_FULLFSYNC) no 2026-03-31T19:12:45.312 INFO:tasks.workunit.client.0.vm01.stdout:Linux AIO support yes 2026-03-31T19:12:45.312 INFO:tasks.workunit.client.0.vm01.stdout:Linux AIO support rw flags yes 2026-03-31T19:12:45.330 INFO:tasks.workunit.client.0.vm01.stdout:POSIX AIO support yes 2026-03-31T19:12:45.330 INFO:tasks.workunit.client.0.vm01.stdout:POSIX AIO support needs -lrt no 2026-03-31T19:12:45.349 
INFO:tasks.workunit.client.0.vm01.stdout:POSIX AIO fsync yes
2026-03-31T19:12:45.368 INFO:tasks.workunit.client.0.vm01.stdout:POSIX pshared support yes
2026-03-31T19:12:45.387 INFO:tasks.workunit.client.0.vm01.stdout:pthread_condattr_setclock() yes
2026-03-31T19:12:45.407 INFO:tasks.workunit.client.0.vm01.stdout:pthread_sigmask() yes
2026-03-31T19:12:45.429 INFO:tasks.workunit.client.0.vm01.stdout:pthread_getaffinity_np() yes
2026-03-31T19:12:45.434 INFO:tasks.workunit.client.0.vm01.stdout:Solaris AIO support no
2026-03-31T19:12:45.452 INFO:tasks.workunit.client.0.vm01.stdout:__sync_fetch_and_add yes
2026-03-31T19:12:45.470 INFO:tasks.workunit.client.0.vm01.stdout:__sync_synchronize yes
2026-03-31T19:12:45.488 INFO:tasks.workunit.client.0.vm01.stdout:__sync_val_compare_and_swap yes
2026-03-31T19:12:45.492 INFO:tasks.workunit.client.0.vm01.stdout:libverbs no
2026-03-31T19:12:45.498 INFO:tasks.workunit.client.0.vm01.stdout:rdmacm no
2026-03-31T19:12:45.517 INFO:tasks.workunit.client.0.vm01.stdout:asprintf() yes
2026-03-31T19:12:45.537 INFO:tasks.workunit.client.0.vm01.stdout:vasprintf() yes
2026-03-31T19:12:45.557 INFO:tasks.workunit.client.0.vm01.stdout:Linux fallocate yes
2026-03-31T19:12:45.576 INFO:tasks.workunit.client.0.vm01.stdout:POSIX fadvise yes
2026-03-31T19:12:45.596 INFO:tasks.workunit.client.0.vm01.stdout:POSIX fallocate yes
2026-03-31T19:12:45.614 INFO:tasks.workunit.client.0.vm01.stdout:sched_setaffinity(3 arg) yes
2026-03-31T19:12:45.614 INFO:tasks.workunit.client.0.vm01.stdout:sched_setaffinity(2 arg) no
2026-03-31T19:12:45.633 INFO:tasks.workunit.client.0.vm01.stdout:clock_gettime yes
2026-03-31T19:12:45.653 INFO:tasks.workunit.client.0.vm01.stdout:CLOCK_MONOTONIC yes
2026-03-31T19:12:45.672 INFO:tasks.workunit.client.0.vm01.stdout:gettimeofday yes
2026-03-31T19:12:45.692 INFO:tasks.workunit.client.0.vm01.stdout:fdatasync yes
2026-03-31T19:12:45.710 INFO:tasks.workunit.client.0.vm01.stdout:pipe() yes
2026-03-31T19:12:45.729 INFO:tasks.workunit.client.0.vm01.stdout:pipe2() yes
2026-03-31T19:12:45.749 INFO:tasks.workunit.client.0.vm01.stdout:pread() yes
2026-03-31T19:12:45.770 INFO:tasks.workunit.client.0.vm01.stdout:sync_file_range yes
2026-03-31T19:12:45.792 INFO:tasks.workunit.client.0.vm01.stdout:syncfs yes
2026-03-31T19:12:45.796 INFO:tasks.workunit.client.0.vm01.stdout:ASharedMemory_create no
2026-03-31T19:12:45.802 INFO:tasks.workunit.client.0.vm01.stdout:EXT4 move extent yes
2026-03-31T19:12:45.821 INFO:tasks.workunit.client.0.vm01.stdout:Linux splice(2) yes
2026-03-31T19:12:45.825 INFO:tasks.workunit.client.0.vm01.stdout:libnuma no
2026-03-31T19:12:45.844 INFO:tasks.workunit.client.0.vm01.stdout:strsep yes
2026-03-31T19:12:45.862 INFO:tasks.workunit.client.0.vm01.stdout:strcasestr yes
2026-03-31T19:12:45.870 INFO:tasks.workunit.client.0.vm01.stdout:strlcat no
2026-03-31T19:12:45.890 INFO:tasks.workunit.client.0.vm01.stdout:getopt_long_only() yes
2026-03-31T19:12:45.912 INFO:tasks.workunit.client.0.vm01.stdout:inet_aton yes
2026-03-31T19:12:45.931 INFO:tasks.workunit.client.0.vm01.stdout:socklen_t yes
2026-03-31T19:12:45.951 INFO:tasks.workunit.client.0.vm01.stdout:__thread yes
2026-03-31T19:12:45.971 INFO:tasks.workunit.client.0.vm01.stdout:RUSAGE_THREAD yes
2026-03-31T19:12:45.990 INFO:tasks.workunit.client.0.vm01.stdout:SCHED_IDLE yes
2026-03-31T19:12:46.011 INFO:tasks.workunit.client.0.vm01.stdout:TCP_NODELAY yes
2026-03-31T19:12:46.030 INFO:tasks.workunit.client.0.vm01.stdout:vsock yes
2026-03-31T19:12:46.052 INFO:tasks.workunit.client.0.vm01.stdout:Net engine window_size yes
2026-03-31T19:12:46.074 INFO:tasks.workunit.client.0.vm01.stdout:TCP_MAXSEG yes
2026-03-31T19:12:46.096 INFO:tasks.workunit.client.0.vm01.stdout:RLIMIT_MEMLOCK yes
2026-03-31T19:12:46.119 INFO:tasks.workunit.client.0.vm01.stdout:pwritev/preadv yes
2026-03-31T19:12:46.140 INFO:tasks.workunit.client.0.vm01.stdout:pwritev2/preadv2 yes
2026-03-31T19:12:46.162 INFO:tasks.workunit.client.0.vm01.stdout:IPv6 helpers yes
2026-03-31T19:12:46.170 INFO:tasks.workunit.client.0.vm01.stdout:http engine no
2026-03-31T19:12:46.239 INFO:tasks.workunit.client.0.vm01.stdout:Rados engine yes
2026-03-31T19:12:46.321 INFO:tasks.workunit.client.0.vm01.stdout:Rados Block Device engine yes
2026-03-31T19:12:46.402 INFO:tasks.workunit.client.0.vm01.stdout:rbd_poll yes
2026-03-31T19:12:46.482 INFO:tasks.workunit.client.0.vm01.stdout:rbd_invalidate_cache yes
2026-03-31T19:12:46.565 INFO:tasks.workunit.client.0.vm01.stdout:rbd_encryption_load yes
2026-03-31T19:12:46.584 INFO:tasks.workunit.client.0.vm01.stdout:setvbuf yes
2026-03-31T19:12:46.588 INFO:tasks.workunit.client.0.vm01.stdout:Gluster API engine no
2026-03-31T19:12:46.596 INFO:tasks.workunit.client.0.vm01.stdout:s390_z196_facilities no
2026-03-31T19:12:46.596 INFO:tasks.workunit.client.0.vm01.stdout:HDFS engine no
2026-03-31T19:12:46.616 INFO:tasks.workunit.client.0.vm01.stdout:MTD yes
2026-03-31T19:12:46.620 INFO:tasks.workunit.client.0.vm01.stdout:libpmem no
2026-03-31T19:12:46.620 INFO:tasks.workunit.client.0.vm01.stdout:libpmem1_5 no
2026-03-31T19:12:46.624 INFO:tasks.workunit.client.0.vm01.stdout:libpmem2 no
2026-03-31T19:12:46.624 INFO:tasks.workunit.client.0.vm01.stdout:PMDK dev-dax engine no
2026-03-31T19:12:46.624 INFO:tasks.workunit.client.0.vm01.stdout:PMDK libpmem engine no
2026-03-31T19:12:46.628 INFO:tasks.workunit.client.0.vm01.stdout:DDN's Infinite Memory Engine no
2026-03-31T19:12:46.628 INFO:tasks.workunit.client.0.vm01.stdout:iscsi engine no
2026-03-31T19:12:46.628 INFO:tasks.workunit.client.0.vm01.stdout:NBD engine no
2026-03-31T19:12:46.633 INFO:tasks.workunit.client.0.vm01.stdout:DAOS File System (dfs) Engine no
2026-03-31T19:12:46.633 INFO:tasks.workunit.client.0.vm01.stdout:NFS engine no
2026-03-31T19:12:46.633 INFO:tasks.workunit.client.0.vm01.stdout:lex/yacc for arithmetic no
2026-03-31T19:12:46.652 INFO:tasks.workunit.client.0.vm01.stdout:getmntent yes
2026-03-31T19:12:46.663 INFO:tasks.workunit.client.0.vm01.stdout:getmntinfo no
2026-03-31T19:12:46.688 INFO:tasks.workunit.client.0.vm01.stdout:Static Assert yes
2026-03-31T19:12:46.705 INFO:tasks.workunit.client.0.vm01.stdout:bool yes
2026-03-31T19:12:46.725 INFO:tasks.workunit.client.0.vm01.stdout:strndup yes
2026-03-31T19:12:46.746 INFO:tasks.workunit.client.0.vm01.stdout:Valgrind headers yes
2026-03-31T19:12:46.763 INFO:tasks.workunit.client.0.vm01.stdout:Zoned block device support yes
2026-03-31T19:12:46.780 INFO:tasks.workunit.client.0.vm01.stdout:Zoned block device capacity yes
2026-03-31T19:12:46.784 INFO:tasks.workunit.client.0.vm01.stdout:libzbc engine no
2026-03-31T19:12:46.789 INFO:tasks.workunit.client.0.vm01.stdout:NVMe uring command support no
2026-03-31T19:12:46.789 INFO:tasks.workunit.client.0.vm01.stdout:xnvme missing pkg-config, can't check xnvme version
2026-03-31T19:12:46.789 INFO:tasks.workunit.client.0.vm01.stdout:xnvme engine no
2026-03-31T19:12:46.793 INFO:tasks.workunit.client.0.vm01.stdout:isal no
2026-03-31T19:12:46.793 INFO:tasks.workunit.client.0.vm01.stdout:isal CRC64 no
2026-03-31T19:12:46.794 INFO:tasks.workunit.client.0.vm01.stdout:blkio missing pkg-config, can't check blkio version
2026-03-31T19:12:46.794 INFO:tasks.workunit.client.0.vm01.stdout:libblkio engine no
2026-03-31T19:12:46.794 INFO:tasks.workunit.client.0.vm01.stdout:march_armv8_a_crc_crypto no
2026-03-31T19:12:46.794 INFO:tasks.workunit.client.0.vm01.stdout:cuda no
2026-03-31T19:12:46.794 INFO:tasks.workunit.client.0.vm01.stdout:libcufile no
2026-03-31T19:12:46.814 INFO:tasks.workunit.client.0.vm01.stdout:Build march=native yes
2026-03-31T19:12:46.818 INFO:tasks.workunit.client.0.vm01.stdout:CUnit no
2026-03-31T19:12:46.836 INFO:tasks.workunit.client.0.vm01.stdout:__kernel_rwf_t yes
2026-03-31T19:12:46.853 INFO:tasks.workunit.client.0.vm01.stdout:-Wimplicit-fallthrough=2 yes
2026-03-31T19:12:46.872 INFO:tasks.workunit.client.0.vm01.stdout:-Wno-stringop-truncation yes
2026-03-31T19:12:46.890 INFO:tasks.workunit.client.0.vm01.stdout:MADV_HUGEPAGE yes
2026-03-31T19:12:46.909 INFO:tasks.workunit.client.0.vm01.stdout:gettid yes
2026-03-31T19:12:46.930 INFO:tasks.workunit.client.0.vm01.stdout:statx(2)/libc yes
2026-03-31T19:12:46.951 INFO:tasks.workunit.client.0.vm01.stdout:statx(2)/syscall yes
2026-03-31T19:12:46.952 INFO:tasks.workunit.client.0.vm01.stdout:Windows PDB generation no
2026-03-31T19:12:46.971 INFO:tasks.workunit.client.0.vm01.stdout:timerfd_create yes
2026-03-31T19:12:46.971 INFO:tasks.workunit.client.0.vm01.stdout:Lib-based ioengines dynamic no
2026-03-31T19:12:46.999 INFO:tasks.workunit.client.0.vm01.stdout:TCMalloc support yes
2026-03-31T19:12:47.000 INFO:tasks.workunit.client.0.vm01.stdout:seed_buckets 4
2026-03-31T19:12:47.001 INFO:tasks.workunit.client.0.vm01.stderr:+ make
2026-03-31T19:12:47.063 INFO:tasks.workunit.client.0.vm01.stderr:FIO_VERSION = fio-3.41-142-g698f
2026-03-31T19:12:47.135 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc16.o
2026-03-31T19:12:47.161 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc32.o
2026-03-31T19:12:47.186 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc32c-arm64.o
2026-03-31T19:12:47.233 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc32c-intel.o
2026-03-31T19:12:47.263 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc32c.o
2026-03-31T19:12:47.288 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc64.o
2026-03-31T19:12:47.312 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crc7.o
2026-03-31T19:12:47.334 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/crct10dif_common.o
2026-03-31T19:12:47.356 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/fnv.o
2026-03-31T19:12:47.382 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/md5.o
2026-03-31T19:12:47.439 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/murmur3.o
2026-03-31T19:12:47.466 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/sha1.o
2026-03-31T19:12:47.584 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/sha256.o
2026-03-31T19:12:47.739 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/sha3.o
2026-03-31T19:12:47.824 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/sha512.o
2026-03-31T19:12:47.899 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/test.o
2026-03-31T19:12:48.009 INFO:tasks.workunit.client.0.vm01.stdout: CC crc/xxhash.o
2026-03-31T19:12:48.067 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/axmap.o
2026-03-31T19:12:48.147 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/bloom.o
2026-03-31T19:12:48.205 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/flist_sort.o
2026-03-31T19:12:48.254 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/gauss.o
2026-03-31T19:12:48.305 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/getrusage.o
2026-03-31T19:12:48.328 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/hweight.o
2026-03-31T19:12:48.352 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/ieee754.o
2026-03-31T19:12:48.382 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/lfsr.o
2026-03-31T19:12:48.441 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/memalign.o
2026-03-31T19:12:48.470 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/memcpy.o
2026-03-31T19:12:48.552 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/mountcheck.o
2026-03-31T19:12:48.579 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/num2str.o
2026-03-31T19:12:48.627 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/output_buffer.o
2026-03-31T19:12:48.659 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/pattern.o
2026-03-31T19:12:48.747 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/prio_tree.o
2026-03-31T19:12:48.818 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/rand.o
2026-03-31T19:12:48.898 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/rbtree.o
2026-03-31T19:12:48.971 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/strntol.o
2026-03-31T19:12:49.004 INFO:tasks.workunit.client.0.vm01.stdout: CC lib/zipf.o
2026-03-31T19:12:49.079 INFO:tasks.workunit.client.0.vm01.stdout: CC gettime.o
2026-03-31T19:12:49.263 INFO:tasks.workunit.client.0.vm01.stdout: CC ioengines.o
2026-03-31T19:12:49.417 INFO:tasks.workunit.client.0.vm01.stdout: CC init.o
2026-03-31T19:12:49.857 INFO:tasks.workunit.client.0.vm01.stdout: CC stat.o
2026-03-31T19:12:50.694 INFO:tasks.workunit.client.0.vm01.stdout: CC log.o
2026-03-31T19:12:50.777 INFO:tasks.workunit.client.0.vm01.stdout: CC time.o
2026-03-31T19:12:50.883 INFO:tasks.workunit.client.0.vm01.stdout: CC filesetup.o
2026-03-31T19:12:51.303 INFO:tasks.workunit.client.0.vm01.stdout: CC eta.o
2026-03-31T19:12:51.495 INFO:tasks.workunit.client.0.vm01.stdout: CC verify.o
2026-03-31T19:12:51.898 INFO:tasks.workunit.client.0.vm01.stdout: CC memory.o
2026-03-31T19:12:52.007 INFO:tasks.workunit.client.0.vm01.stdout: CC io_u.o
2026-03-31T19:12:52.452 INFO:tasks.workunit.client.0.vm01.stdout: CC parse.o
2026-03-31T19:12:52.710 INFO:tasks.workunit.client.0.vm01.stdout: CC fio_sem.o
2026-03-31T19:12:52.790 INFO:tasks.workunit.client.0.vm01.stdout: CC rwlock.o
2026-03-31T19:12:52.848 INFO:tasks.workunit.client.0.vm01.stdout: CC pshared.o
2026-03-31T19:12:52.895 INFO:tasks.workunit.client.0.vm01.stdout: CC options.o
2026-03-31T19:12:53.446 INFO:tasks.workunit.client.0.vm01.stdout: CC fio_shared_sem.o
2026-03-31T19:12:53.479 INFO:tasks.workunit.client.0.vm01.stdout: CC smalloc.o
2026-03-31T19:12:53.595 INFO:tasks.workunit.client.0.vm01.stdout: CC filehash.o
2026-03-31T19:12:53.699 INFO:tasks.workunit.client.0.vm01.stdout: CC profile.o
2026-03-31T19:12:53.779 INFO:tasks.workunit.client.0.vm01.stdout: CC debug.o
2026-03-31T19:12:53.809 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/cpu.o
2026-03-31T19:12:53.903 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/mmap.o
2026-03-31T19:12:54.005 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/sync.o
2026-03-31T19:12:54.141 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/null.o
2026-03-31T19:12:54.224 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/net.o
2026-03-31T19:12:54.469 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/ftruncate.o
2026-03-31T19:12:54.537 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/fileoperations.o
2026-03-31T19:12:54.639 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/exec.o
2026-03-31T19:12:54.736 INFO:tasks.workunit.client.0.vm01.stdout: CC server.o
2026-03-31T19:12:55.200 INFO:tasks.workunit.client.0.vm01.stdout: CC client.o
2026-03-31T19:12:55.537 INFO:tasks.workunit.client.0.vm01.stdout: CC iolog.o
2026-03-31T19:12:55.861 INFO:tasks.workunit.client.0.vm01.stdout: CC backend.o
2026-03-31T19:12:56.346 INFO:tasks.workunit.client.0.vm01.stdout: CC libfio.o
2026-03-31T19:12:56.450 INFO:tasks.workunit.client.0.vm01.stdout: CC flow.o
2026-03-31T19:12:56.526 INFO:tasks.workunit.client.0.vm01.stdout: CC cconv.o
2026-03-31T19:12:56.963 INFO:tasks.workunit.client.0.vm01.stdout: CC gettime-thread.o
2026-03-31T19:12:57.035 INFO:tasks.workunit.client.0.vm01.stdout: CC helpers.o
2026-03-31T19:12:57.078 INFO:tasks.workunit.client.0.vm01.stdout: CC json.o
2026-03-31T19:12:57.192 INFO:tasks.workunit.client.0.vm01.stdout: CC idletime.o
2026-03-31T19:12:57.338 INFO:tasks.workunit.client.0.vm01.stdout: CC td_error.o
2026-03-31T19:12:57.407 INFO:tasks.workunit.client.0.vm01.stdout: CC profiles/tiobench.o
2026-03-31T19:12:57.476 INFO:tasks.workunit.client.0.vm01.stdout: CC profiles/act.o
2026-03-31T19:12:57.580 INFO:tasks.workunit.client.0.vm01.stdout: CC io_u_queue.o
2026-03-31T19:12:57.613 INFO:tasks.workunit.client.0.vm01.stdout: CC filelock.o
2026-03-31T19:12:57.693 INFO:tasks.workunit.client.0.vm01.stdout: CC workqueue.o
2026-03-31T19:12:57.804 INFO:tasks.workunit.client.0.vm01.stdout: CC rate-submit.o
2026-03-31T19:12:57.902 INFO:tasks.workunit.client.0.vm01.stdout: CC optgroup.o
2026-03-31T19:12:57.934 INFO:tasks.workunit.client.0.vm01.stdout: CC helper_thread.o
2026-03-31T19:12:58.052 INFO:tasks.workunit.client.0.vm01.stdout: CC steadystate.o
2026-03-31T19:12:58.218 INFO:tasks.workunit.client.0.vm01.stdout: CC zone-dist.o
2026-03-31T19:12:58.313 INFO:tasks.workunit.client.0.vm01.stdout: CC zbd.o
2026-03-31T19:12:58.732 INFO:tasks.workunit.client.0.vm01.stdout: CC dedupe.o
2026-03-31T19:12:58.828 INFO:tasks.workunit.client.0.vm01.stdout: CC dataplacement.o
2026-03-31T19:12:58.945 INFO:tasks.workunit.client.0.vm01.stdout: CC sprandom.o
2026-03-31T19:12:59.142 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/posixaio.o
2026-03-31T19:12:59.230 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/falloc.o
2026-03-31T19:12:59.308 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/e4defrag.o
2026-03-31T19:12:59.399 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/splice.o
2026-03-31T19:12:59.512 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/asprintf.o
2026-03-31T19:12:59.537 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/strlcat.o
2026-03-31T19:12:59.567 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/mtd.o
2026-03-31T19:12:59.668 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/libmtd.o
2026-03-31T19:12:59.911 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/libmtd_legacy.o
2026-03-31T19:13:00.002 INFO:tasks.workunit.client.0.vm01.stdout: CC diskutil.o
2026-03-31T19:13:00.159 INFO:tasks.workunit.client.0.vm01.stdout: CC fifo.o
2026-03-31T19:13:00.200 INFO:tasks.workunit.client.0.vm01.stdout: CC blktrace.o
2026-03-31T19:13:00.394 INFO:tasks.workunit.client.0.vm01.stdout: CC cgroup.o
2026-03-31T19:13:00.493 INFO:tasks.workunit.client.0.vm01.stdout: CC trim.o
2026-03-31T19:13:00.568 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/sg.o
2026-03-31T19:13:00.790 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/linux-dev-lookup.o
2026-03-31T19:13:00.829 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/io_uring.o
2026-03-31T19:13:01.090 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/nvme.o
2026-03-31T19:13:01.275 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/linux-blkzoned.o
2026-03-31T19:13:01.379 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/cmdprio.o
2026-03-31T19:13:01.526 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/libaio.o
2026-03-31T19:13:01.637 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/rados.o
2026-03-31T19:13:01.753 INFO:tasks.workunit.client.0.vm01.stdout: CC engines/rbd.o
2026-03-31T19:13:01.870 INFO:tasks.workunit.client.0.vm01.stdout: CC fio.o
2026-03-31T19:13:01.931 INFO:tasks.workunit.client.0.vm01.stdout: LINK fio
2026-03-31T19:13:02.108 INFO:tasks.workunit.client.0.vm01.stdout: CC t/log.o
2026-03-31T19:13:02.137 INFO:tasks.workunit.client.0.vm01.stdout: CC t/genzipf.o
2026-03-31T19:13:02.221 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/strcasestr.o
2026-03-31T19:13:02.237 INFO:tasks.workunit.client.0.vm01.stdout: CC oslib/strndup.o
2026-03-31T19:13:02.251 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/fio-genzipf
2026-03-31T19:13:02.280 INFO:tasks.workunit.client.0.vm01.stdout: CC t/btrace2fio.o
2026-03-31T19:13:02.510 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/fio-btrace2fio
2026-03-31T19:13:02.539 INFO:tasks.workunit.client.0.vm01.stdout: CC t/dedupe.o
2026-03-31T19:13:02.680 INFO:tasks.workunit.client.0.vm01.stdout: CC t/debug.o
2026-03-31T19:13:02.704 INFO:tasks.workunit.client.0.vm01.stdout: CC t/arch.o
2026-03-31T19:13:02.724 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/fio-dedupe
2026-03-31T19:13:02.756 INFO:tasks.workunit.client.0.vm01.stdout: CC t/verify-state.o
2026-03-31T19:13:02.821 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/fio-verify-state
2026-03-31T19:13:02.849 INFO:tasks.workunit.client.0.vm01.stdout: CC t/stest.o
2026-03-31T19:13:02.899 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/stest
2026-03-31T19:13:02.928 INFO:tasks.workunit.client.0.vm01.stdout: CC t/ieee754.o
2026-03-31T19:13:02.961 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/ieee754
2026-03-31T19:13:02.978 INFO:tasks.workunit.client.0.vm01.stdout: CC t/axmap.o
2026-03-31T19:13:03.042 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/axmap
2026-03-31T19:13:03.070 INFO:tasks.workunit.client.0.vm01.stdout: CC t/lfsr-test.o
2026-03-31T19:13:03.132 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/lfsr-test
2026-03-31T19:13:03.163 INFO:tasks.workunit.client.0.vm01.stdout: CC t/gen-rand.o
2026-03-31T19:13:03.225 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/gen-rand
2026-03-31T19:13:03.254 INFO:tasks.workunit.client.0.vm01.stdout: CC t/memlock.o
2026-03-31T19:13:03.295 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/memlock
2026-03-31T19:13:03.324 INFO:tasks.workunit.client.0.vm01.stdout: CC t/read-to-pipe-async.o
2026-03-31T19:13:03.438 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/read-to-pipe-async
2026-03-31T19:13:03.467 INFO:tasks.workunit.client.0.vm01.stdout: CC t/io_uring.o
2026-03-31T19:13:03.837 INFO:tasks.workunit.client.0.vm01.stdout: LINK t/io_uring
2026-03-31T19:13:03.879 INFO:tasks.workunit.client.0.vm01.stderr:+ sudo make install
2026-03-31T19:13:03.970 INFO:tasks.workunit.client.0.vm01.stdout:install -m 755 -d /usr/local/bin
2026-03-31T19:13:03.971 INFO:tasks.workunit.client.0.vm01.stdout:install fio t/fio-genzipf t/fio-btrace2fio t/fio-dedupe t/fio-verify-state ./tools/fio_generate_plots ./tools/plot/fio2gnuplot ./tools/genfio ./tools/fiologparser.py ./tools/hist/fiologparser_hist.py ./tools/hist/fio-histo-log-pctiles.py ./tools/fio_jsonplus_clat2csv /usr/local/bin
2026-03-31T19:13:03.977 INFO:tasks.workunit.client.0.vm01.stdout:install -m 755 -d /usr/local/share/man/man1
2026-03-31T19:13:03.978 INFO:tasks.workunit.client.0.vm01.stdout:install -m 644 ./fio.1 /usr/local/share/man/man1
2026-03-31T19:13:03.979 INFO:tasks.workunit.client.0.vm01.stdout:install -m 644 ./tools/fio_generate_plots.1 /usr/local/share/man/man1
2026-03-31T19:13:03.980 INFO:tasks.workunit.client.0.vm01.stdout:install -m 644 ./tools/plot/fio2gnuplot.1 /usr/local/share/man/man1
2026-03-31T19:13:03.981 INFO:tasks.workunit.client.0.vm01.stdout:install -m 644 ./tools/hist/fiologparser_hist.py.1 /usr/local/share/man/man1
2026-03-31T19:13:03.982 INFO:tasks.workunit.client.0.vm01.stdout:install -m 755 -d /usr/local/share/fio
2026-03-31T19:13:03.983 INFO:tasks.workunit.client.0.vm01.stdout:install -m 644 ./tools/plot/*gpm /usr/local/share/fio/
2026-03-31T19:13:03.986 INFO:tasks.workunit.client.0.vm01.stdout:/home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-31T19:13:03.986 INFO:tasks.workunit.client.0.vm01.stderr:+ cd -
2026-03-31T19:13:03.986 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-31T19:13:13.988 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph config set osd osd_memory_target 939524096
2026-03-31T19:13:14.225 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph config set osd bluestore_onode_segment_size 0
2026-03-31T19:13:14.448 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd erasure-code-profile set myecprofile k=2 m=1
2026-03-31T19:13:15.576 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool create ecpool 16 16 erasure myecprofile
2026-03-31T19:13:17.573 INFO:tasks.workunit.client.0.vm01.stderr:pool 'ecpool' already exists
2026-03-31T19:13:17.588 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool set ecpool allow_ec_overwrites true
2026-03-31T19:13:19.533 INFO:tasks.workunit.client.0.vm01.stderr:set pool 3 allow_ec_overwrites to true
2026-03-31T19:13:19.546 INFO:tasks.workunit.client.0.vm01.stdout:[ec-esb-fio] Starting FIO test...
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ trap cleanup EXIT INT TERM
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ echo '[ec-esb-fio] Starting FIO test...'
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ FIO_PID=23756
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=test-ec-esb --ioengine=rados --pool=ecpool --clientname=admin --conf=/etc/ceph/ceph.conf --time_based=1 --runtime=1h --invalidate=0 --direct=1 --touch_objects=0 --iodepth=32 --numjobs=4 --rw=randwrite --file_service_type=pareto:0.20:0 --bssplit=4k/16:8k/10:12k/9:16k/8:20k/7:24k/7 --size=15G --nrfiles=12500 '--filename_format=stress_obj.$jobnum.$filenum'
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph config dump
2026-03-31T19:13:19.547 INFO:tasks.workunit.client.0.vm01.stderr:+ grep bluestore_elastic_shared_blobs
2026-03-31T19:13:19.743 INFO:tasks.workunit.client.0.vm01.stdout:test-ec-esb: (g=0): rw=randwrite, bs=(R) 4096B-24.0KiB, (W) 4096B-24.0KiB, (T) 4096B-24.0KiB, ioengine=rados, iodepth=32
2026-03-31T19:13:19.743 INFO:tasks.workunit.client.0.vm01.stdout:...
2026-03-31T19:13:19.743 INFO:tasks.workunit.client.0.vm01.stdout:fio-3.41-142-g698f
2026-03-31T19:13:19.743 INFO:tasks.workunit.client.0.vm01.stdout:Starting 4 processes
2026-03-31T19:13:19.768 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:13:19.768 INFO:tasks.workunit.client.0.vm01.stderr:+ grep bluestore_onode_segment_size
2026-03-31T19:13:19.770 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph config dump
2026-03-31T19:13:20.049 INFO:tasks.workunit.client.0.vm01.stdout:osd advanced bluestore_onode_segment_size 0
2026-03-31T19:13:20.049 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd dump
2026-03-31T19:13:20.049 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 10 ecpool
2026-03-31T19:13:20.332 INFO:tasks.workunit.client.0.vm01.stdout:pool 3 'ecpool' erasure profile myecprofile size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode off last_change 27 flags hashpspool,ec_overwrites stripe_width 8192
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:max_osd 6
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.0 up in weight 1 up_from 12 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.103:6804/1584285308 v1:192.168.123.103:6805/1584285308 exists,up 8e42da84-9085-490f-bcb2-12a7601715cc
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.1 up in weight 1 up_from 11 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.103:6800/2764498957 v1:192.168.123.103:6801/2764498957 exists,up c3e0761c-df20-4f27-aa02-67cba4a367a0
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.2 up in weight 1 up_from 11 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.105:6800/473978242 v1:192.168.123.105:6801/473978242 exists,up 33de56d5-999a-4940-9d6b-bd0e44b33124
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.3 up in weight 1 up_from 11 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.105:6804/457425259 v1:192.168.123.105:6805/457425259 exists,up 9fdc304f-7cea-4331-b3c0-c11b69079ac4
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.4 up in weight 1 up_from 11 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.106:6800/682239721 v1:192.168.123.106:6801/682239721 exists,up ead5f212-78f7-4fbe-971c-f5e7aafbfd46
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stdout:osd.5 up in weight 1 up_from 14 up_thru 25 down_at 0 last_clean_interval [0,0) v1:192.168.123.106:6804/146747963 v1:192.168.123.106:6805/146747963 exists,up 453e4d44-fda0-456d-bc9d-f74967039030
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stderr:+ TIMEOUT=3600
2026-03-31T19:13:20.333 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:13:20.334 INFO:tasks.workunit.client.0.vm01.stderr:+ START_TIME=1774984400
2026-03-31T19:13:20.334 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:13:20.334 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:13:20.335 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984400
2026-03-31T19:13:20.335 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=0
2026-03-31T19:13:20.335 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 0 -ge 3600 ']'
2026-03-31T19:13:20.336 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:13:20.336 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:13:20.723 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:  cluster:
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    health: HEALTH_OK
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:  services:
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    mon: 1 daemons, quorum a (age 98s) [leader: a]
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    mgr: x(active, since 96s)
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    osd: 6 osds: 6 up (since 91s), 6 in (since 97s)
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:  data:
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    pools: 3 pools, 25 pgs
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    objects: 4 objects, 449 KiB
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    usage: 159 MiB used, 540 GiB / 540 GiB avail
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:    pgs: 56.000% pgs unknown
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:         8.000% pgs not active
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:         14 unknown
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:         9 active+clean
2026-03-31T19:13:21.079 INFO:tasks.workunit.client.0.vm01.stdout:         2 creating+peering
2026-03-31T19:13:21.080 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:13:21.096 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:13:21.097 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 553,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 0.000333613,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 1028,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 5,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2679,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1444,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1232,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 0,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 4134,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 3979,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 27,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 18788,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 209,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:13:21.225 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:13:21.226 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:13:21.226 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:13:21.226 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:13:21.226 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:14:21.226 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:14:21.227 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:14:21.228 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984461
2026-03-31T19:14:21.228 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=61
2026-03-31T19:14:21.228 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 61 -ge 3600 ']'
2026-03-31T19:14:21.228 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:14:21.228 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:14:21.547 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:  cluster:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    health: HEALTH_OK
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:  services:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    mon: 1 daemons, quorum a (age 2m) [leader: a]
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    mgr: x(active, since 2m)
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    osd: 6 osds: 6 up (since 2m), 6 in (since 2m)
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:  data:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    pools: 3 pools, 25 pgs
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    objects: 19.45k objects, 14 GiB
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    usage: 12 GiB used, 528 GiB / 540 GiB avail
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    pgs: 25 active+clean
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:  io:
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:    client: 5.8 MiB/s wr, 0 op/s rd, 760 op/s wr
2026-03-31T19:14:21.842 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:14:21.856 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:14:21.856 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 26841,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 0.136584551,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4994,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 109238,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 32981,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 73973,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 95,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 26682,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 20174,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 47,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 100708,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 12793,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:14:21.972 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:14:21.973 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:14:21.973 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:15:21.974 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:15:21.974 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:15:21.975 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984521
2026-03-31T19:15:21.975 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=121
2026-03-31T19:15:21.975 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 121 -ge 3600 ']'
2026-03-31T19:15:21.975 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:15:21.975 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:15:22.386 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:  cluster:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    health: HEALTH_OK
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:  services:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    mon: 1 daemons, quorum a (age 3m) [leader: a]
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    mgr: x(active, since 3m)
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    osd: 6 osds: 6 up (since 3m), 6 in (since 3m)
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:  data:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    pools: 3 pools, 25 pgs
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    objects: 26.03k objects, 19 GiB
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    usage: 18 GiB used, 522 GiB / 540 GiB avail
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    pgs: 25 active+clean
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:  io:
2026-03-31T19:15:22.708 INFO:tasks.workunit.client.0.vm01.stdout:    client: 4.4 MiB/s wr, 0 op/s rd, 598 op/s wr
2026-03-31T19:15:22.709 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:15:22.723 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:15:22.723 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 46265,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 0.390261955,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4633,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 183865,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 53649,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 131257,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 674,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 30749,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 20974,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 23308,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:15:22.833 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:15:22.834 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:15:22.834 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:16:22.834 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:16:22.834 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:16:22.835 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984582
2026-03-31T19:16:22.835 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=182
2026-03-31T19:16:22.835 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 182 -ge 3600 ']'
2026-03-31T19:16:22.835 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:16:22.835 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:16:23.152 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_OK
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 4m) [leader: a]
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 4m)
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 4m), 6 in (since 4m)
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: objects: 30.66k objects, 23 GiB
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: usage: 17 GiB used, 523 GiB / 540 GiB avail
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:16:23.480 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:16:23.481 INFO:tasks.workunit.client.0.vm01.stdout: client: 5.9 MiB/s wr, 0 op/s rd, 800 op/s wr
2026-03-31T19:16:23.481 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:16:23.494 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:16:23.494 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 65005,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 0.736373097,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4403,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 254564,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 73074,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 187042,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 1809,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 33490,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 21517,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 33651,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:16:23.599 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:17:23.600 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:17:23.600 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:17:23.600 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984643
2026-03-31T19:17:23.600 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=243
2026-03-31T19:17:23.600 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 243 -ge 3600 ']'
2026-03-31T19:17:23.601 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:17:23.601 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:17:23.936 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:17:24.228 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_OK
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 5m) [leader: a]
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 5m)
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 5m), 6 in (since 5m)
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: objects: 33.62k objects, 26 GiB
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: usage: 20 GiB used, 520 GiB / 540 GiB avail
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.5 MiB/s wr, 0 op/s rd, 615 op/s wr
2026-03-31T19:17:24.229 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:17:24.241 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:17:24.241 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 82677,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 1.148496288,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4402,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 320019,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 90831,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 240852,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 3338,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 35267,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 21736,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 34,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 47460,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 43786,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:17:24.348 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:18:24.348 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:18:24.349 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:18:24.350 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984704
2026-03-31T19:18:24.350 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=304
2026-03-31T19:18:24.350 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 304 -ge 3600 ']'
2026-03-31T19:18:24.350 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:18:24.350 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:18:24.673 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 6m) [leader: a]
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 6m)
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 6m), 6 in (since 6m)
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: objects: 36.07k objects, 29 GiB
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: usage: 22 GiB used, 518 GiB / 540 GiB avail
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.2 MiB/s wr, 0 op/s rd, 570 op/s wr
2026-03-31T19:18:24.967 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:18:24.981 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:18:24.981 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 100563,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 1.606390436,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4319,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 385333,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 108494,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 296935,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 5247,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 36841,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 21747,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 54494,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:18:25.084 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:18:25.085 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:18:25.085 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:18:25.085 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:18:25.085 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:19:25.086 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:19:25.086 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:19:25.086 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984765
2026-03-31T19:19:25.087 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=365
2026-03-31T19:19:25.087 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 365 -ge 3600 ']'
2026-03-31T19:19:25.087 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:19:25.087 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:19:25.403 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 7m) [leader: a]
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 7m)
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 7m), 6 in (since 7m)
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: objects: 38.10k objects, 31 GiB
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: usage: 24 GiB used, 516 GiB / 540 GiB avail
2026-03-31T19:19:25.717 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:19:25.718 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:19:25.718 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:19:25.718 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.7 MiB/s wr, 0 op/s rd, 637 op/s wr
2026-03-31T19:19:25.718 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:19:25.732 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:19:25.732 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 117971,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 2.162486724,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 4136,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 448021,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 125415,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 350749,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 7469,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 38710,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 22136,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:19:25.845 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 64848,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:19:25.846 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:20:25.847 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:20:25.847 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:20:25.848 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984825
2026-03-31T19:20:25.848 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=425
2026-03-31T19:20:25.848 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 425 -ge 3600 ']'
2026-03-31T19:20:25.848 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:20:25.848 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:20:26.158 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:20:26.444 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 8m) [leader: a]
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 8m)
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 8m), 6 in (since 8m)
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: objects: 39.70k objects, 33 GiB
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: usage: 26 GiB used, 514 GiB / 540 GiB avail
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout: client: 5.2 MiB/s wr, 0 op/s rd, 715 op/s wr
2026-03-31T19:20:26.445 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:20:26.459 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:20:26.459 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 135523,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 2.743111377,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3996,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 511215,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 142560,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 405163,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 10024,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 40330,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 22563,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 34,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 75439,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:20:26.565 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:21:26.566 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:21:26.566 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:21:26.567 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984886
2026-03-31T19:21:26.567 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=486
2026-03-31T19:21:26.567 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 486 -ge 3600 ']'
2026-03-31T19:21:26.567 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:21:26.567 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:21:26.890 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 9m) [leader: a]
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 9m)
2026-03-31T19:21:27.200 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 9m), 6 in (since 9m)
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: objects: 41.04k objects, 34 GiB
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: usage: 25 GiB used, 515 GiB / 540 GiB avail
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.7 MiB/s wr, 0 op/s rd, 502 op/s wr
2026-03-31T19:21:27.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:21:27.214 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:21:27.214 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 153525,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 3.368158923,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3912,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 575668,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 159891,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 461457,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 12917,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 41236,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 22717,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 33,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 43364,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 86473,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:21:27.313 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:22:27.314 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:22:27.315 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:22:27.315 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774984947
2026-03-31T19:22:27.316 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=547
2026-03-31T19:22:27.316 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 547 -ge 3600 ']'
2026-03-31T19:22:27.316 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:22:27.316 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:22:27.630 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:22:27.936 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:22:27.936 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 10m) [leader: a]
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 10m)
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 10m), 6 in (since 10m)
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: objects: 42.18k objects, 36 GiB
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: usage: 27 GiB used, 513 GiB / 540 GiB avail
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.8 MiB/s wr, 0 op/s rd, 525 op/s wr
2026-03-31T19:22:27.937 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:22:27.951 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:22:27.951 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 172057,
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 4.040769567,
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3924,
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:22:31.065 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 641737,
2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 177695,
2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 520250,
2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 16102,
2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 41829, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 22529, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 98078, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:22:31.066 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:23:31.067 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:23:31.067 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:23:31.068 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985011 2026-03-31T19:23:31.068 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=611 2026-03-31T19:23:31.068 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 611 -ge 3600 ']' 2026-03-31T19:23:31.068 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:23:31.068 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:23:31.385 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: id: 
0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 11m) [leader: a] 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 11m) 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 11m), 6 in (since 11m) 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: objects: 43.22k objects, 37 GiB 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: usage: 30 GiB used, 510 GiB / 540 GiB avail 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: client: 5.2 MiB/s wr, 0 op/s rd, 711 op/s wr 2026-03-31T19:23:31.670 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:23:31.684 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:23:31.684 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 190127, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 4.752576554, 
2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3802, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 2, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 705657, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 195041, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 576467, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 19660, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 43047, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 22897, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 109237, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:23:31.776 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:24:31.777 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:24:31.777 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:24:31.778 INFO:tasks.workunit.client.0.vm01.stderr:+ 
CURRENT_TIME=1774985071 2026-03-31T19:24:31.778 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=671 2026-03-31T19:24:31.778 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 671 -ge 3600 ']' 2026-03-31T19:24:31.778 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:24:31.779 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:24:32.081 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 12m) [leader: a] 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 12m) 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 12m), 6 in (since 12m) 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: objects: 44.10k objects, 38 GiB 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: usage: 31 GiB used, 509 GiB / 540 GiB avail 2026-03-31T19:24:32.362 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:24:32.363 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:24:32.363 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:24:32.363 
INFO:tasks.workunit.client.0.vm01.stdout: client: 5.1 MiB/s wr, 0 op/s rd, 706 op/s wr 2026-03-31T19:24:32.363 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:24:32.376 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:24:32.376 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 209735, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 5.479248529, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3802, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 774922, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 213645, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 639526, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 23671, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 44440, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23109, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 34, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 47460, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 121722, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:24:32.472 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:24:32.472 
INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:24:32.473 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:24:32.473 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:24:32.473 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:24:32.473 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:25:32.473 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:25:32.474 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:25:32.474 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985132 2026-03-31T19:25:32.475 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=732 2026-03-31T19:25:32.475 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 732 -ge 3600 ']' 2026-03-31T19:25:32.475 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:25:32.475 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:25:32.793 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 13m) [leader: a] 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 13m) 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 13m), 6 in (since 13m) 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:25:33.069 
INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: objects: 44.82k objects, 39 GiB 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: usage: 32 GiB used, 508 GiB / 540 GiB avail 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 490 op/s wr 2026-03-31T19:25:33.069 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:25:33.082 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:25:33.082 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:25:33.178 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 227871, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 6.300287724, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3796, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 838623, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 230755, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 697700, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 27514, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 44896, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23098, 2026-03-31T19:25:33.179 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 40, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 76132, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 133196, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:25:33.179 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:26:33.180 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:26:33.180 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:26:33.181 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985193 2026-03-31T19:26:33.181 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=793 2026-03-31T19:26:33.181 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 793 -ge 3600 ']' 2026-03-31T19:26:33.181 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:26:33.181 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:26:33.488 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an 
application enabled 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 14m) [leader: a] 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 14m) 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 14m), 6 in (since 14m) 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: objects: 45.50k objects, 41 GiB 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: usage: 32 GiB used, 508 GiB / 540 GiB avail 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 544 op/s wr 2026-03-31T19:26:33.771 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:26:33.784 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:26:33.784 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 248153, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 7.169256416, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3712, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 
0, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 909899, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 249981, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 762562, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 32124, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 45233, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23168, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 146129, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:26:33.886 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:26:33.887 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:26:33.887 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:26:33.887 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:26:33.887 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:27:33.888 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:27:33.888 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:27:33.889 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985253 2026-03-31T19:27:33.889 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=853 2026-03-31T19:27:33.889 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 853 -ge 3600 ']' 2026-03-31T19:27:33.889 
INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:27:33.889 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:27:34.213 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:27:34.519 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 15m) [leader: a] 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 15m) 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 15m), 6 in (since 15m) 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: objects: 46.05k objects, 41 GiB 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: usage: 32 GiB used, 508 GiB / 540 GiB avail 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.6 MiB/s wr, 0 op/s rd, 647 op/s wr 2026-03-31T19:27:34.520 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:27:34.533 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump 
bluestore 2026-03-31T19:27:34.533 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 268387, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 8.070964523, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3676, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 980175, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 269034, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 828058, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 36917, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 46405, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23292, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 39, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 67940, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 159199, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 
2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:27:34.629 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:28:34.630 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:28:34.630 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:28:34.631 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985314 2026-03-31T19:28:34.631 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=914 2026-03-31T19:28:34.631 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 914 -ge 3600 ']' 2026-03-31T19:28:34.631 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:28:34.631 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:28:34.933 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 16m) [leader: a] 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 16m) 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 16m), 6 in (since 16m) 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:28:35.210 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: objects: 46.55k objects, 42 GiB 2026-03-31T19:28:35.211 
INFO:tasks.workunit.client.0.vm01.stdout: usage: 33 GiB used, 507 GiB / 540 GiB avail 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: client: 5.3 MiB/s wr, 0 op/s rd, 747 op/s wr 2026-03-31T19:28:35.211 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:28:35.224 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:28:35.224 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 288595, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 8.967811911, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3647, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1050867, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 288154, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 894513, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 42017, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 47657, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23387, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172, 
2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 172690, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:28:35.325 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:28:35.326 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:29:35.327 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:29:35.327 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:29:35.327 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985375 2026-03-31T19:29:35.328 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=975 2026-03-31T19:29:35.328 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 975 -ge 3600 ']' 2026-03-31T19:29:35.328 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:29:35.328 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:29:35.631 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:29:35.918 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:29:35.918 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:29:35.918 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:29:35.918 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:29:35.918 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, 
quorum a (age 17m) [leader: a] 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 17m) 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 17m), 6 in (since 17m) 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: objects: 46.98k objects, 43 GiB 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: usage: 33 GiB used, 507 GiB / 540 GiB avail 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.8 MiB/s wr, 0 op/s rd, 528 op/s wr 2026-03-31T19:29:35.919 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:29:35.934 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:29:35.934 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 309397, 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 10.038116084, 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3666, 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1123373, 2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 307808, 2026-03-31T19:29:36.031 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 963237,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 47593,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 47297,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23387,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 38,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 63844,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 186582,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:29:36.031 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:30:36.032 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985436
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1036
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1036 -ge 3600 ']'
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:30:36.033 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:30:36.334 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 18m) [leader: a]
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 18m)
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 18m), 6 in (since 18m)
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: objects: 47.39k objects, 44 GiB
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: usage: 34 GiB used, 506 GiB / 540 GiB avail
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 553 op/s wr
2026-03-31T19:30:36.623 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:30:36.636 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:30:36.636 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 331405,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 11.066843614,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3580,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1199615,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 328398,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1035450,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 53719,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 49203,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23710,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 55652,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 201229,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:30:36.731 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:30:36.732 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:31:36.733 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:31:36.733 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:31:36.733 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985496
2026-03-31T19:31:36.734 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1096
2026-03-31T19:31:36.734 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1096 -ge 3600 ']'
2026-03-31T19:31:36.734 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:31:36.734 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:31:37.037 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 19m) [leader: a]
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 19m)
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 19m), 6 in (since 19m)
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: objects: 47.72k objects, 45 GiB
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: usage: 35 GiB used, 505 GiB / 540 GiB avail
2026-03-31T19:31:37.314 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:31:37.315 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:31:37.315 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:31:37.315 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 543 op/s wr
2026-03-31T19:31:37.315 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:31:37.327 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:31:37.328 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 352689,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 12.153114191,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3610,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1273308,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 348318,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1105753,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 59675,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 48352,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23455,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 27,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 18788,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 215510,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:31:37.425 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:31:37.426 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:32:37.426 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:32:37.426 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:32:37.427 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985557
2026-03-31T19:32:37.427 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1157
2026-03-31T19:32:37.427 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1157 -ge 3600 ']'
2026-03-31T19:32:37.427 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:32:37.427 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:32:37.736 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 20m) [leader: a]
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 20m)
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 20m), 6 in (since 20m)
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: objects: 47.95k objects, 45 GiB
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: usage: 33 GiB used, 507 GiB / 540 GiB avail
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout: client: 2.9 MiB/s wr, 0 op/s rd, 403 op/s wr
2026-03-31T19:32:38.020 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:32:38.033 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:32:38.033 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 369421,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 12.925424554,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3534,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1331087,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 364078,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1162019,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 64717,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 49310,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23781,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 30,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 31076,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 226874,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:32:38.133 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:32:38.134 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:32:38.134 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:32:38.134 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:32:38.134 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:32:38.134 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:33:38.135 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:33:38.135 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:33:38.136 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985618
2026-03-31T19:33:38.136 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1218
2026-03-31T19:33:38.136 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1218 -ge 3600 ']'
2026-03-31T19:33:38.136 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:33:38.136 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:33:38.453 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:33:38.729 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 21m) [leader: a]
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 21m)
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 21m), 6 in (since 21m)
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: objects: 48.21k objects, 46 GiB
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: usage: 35 GiB used, 505 GiB / 540 GiB avail
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.7 MiB/s wr, 0 op/s rd, 523 op/s wr
2026-03-31T19:33:38.730 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:33:38.743 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:33:38.743 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 388415,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 13.789072635,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3437,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1397126,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 382030,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1225324,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 70478,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 50498,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23960,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 37,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 63844,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 239823,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:33:38.846 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:33:38.847 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:33:38.847 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:33:38.847 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:33:38.847 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:34:38.848 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:34:38.848 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:34:38.849 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985678
2026-03-31T19:34:38.849 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1278
2026-03-31T19:34:38.849 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1278 -ge 3600 ']'
2026-03-31T19:34:38.849 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:34:38.849 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:34:39.142 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:34:39.431 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 22m) [leader: a]
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 22m)
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 22m), 6 in (since 22m)
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: objects: 48.45k objects, 46 GiB
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: usage: 37 GiB used, 503 GiB / 540 GiB avail
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.0 MiB/s wr, 0 op/s rd, 565 op/s wr
2026-03-31T19:34:39.432 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:34:39.445 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:34:39.445 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 409239,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 14.746131351,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3455,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1469822,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 401681,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1294981,
2026-03-31T19:34:39.551 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 77000,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 50682,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23825,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 254069,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:34:39.552 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:35:39.553 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:35:39.553 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:35:39.554 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985739
2026-03-31T19:35:39.554 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1339
2026-03-31T19:35:39.554 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1339 -ge 3600 ']'
2026-03-31T19:35:39.554 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:35:39.554 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:35:39.857 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 23m) [leader: a]
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 23m)
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 23m), 6 in (since 23m)
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: objects: 48.65k objects, 47 GiB
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.8 MiB/s wr, 0 op/s rd, 677 op/s wr
2026-03-31T19:35:40.138 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:35:40.151 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:35:40.151 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:35:40.249 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 430045,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 15.762053528,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3479,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1541627,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 421237,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1365126,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 83590,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 50727,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23907,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 32,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 39268,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 268635,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:35:40.250 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:36:40.251 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:36:40.251 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:36:40.252 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985800
2026-03-31T19:36:40.252 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1400
2026-03-31T19:36:40.252 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1400 -ge 3600 ']'
2026-03-31T19:36:40.252 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:36:40.252 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:36:40.559 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 24m) [leader: a]
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 24m)
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 24m), 6 in (since 24m)
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:36:40.844 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout: objects: 48.82k objects, 47 GiB
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.1 MiB/s wr, 0 op/s rd, 580 op/s wr
2026-03-31T19:36:40.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:36:40.860 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:36:40.860 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 451669,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 16.941080698,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3452,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1615359,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 441198,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1437603,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 90395,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 51650,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23885,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 27,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 18788,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 283551,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:36:40.958 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:36:40.959 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:37:26.919 INFO:tasks.ceph.osd.3.vm05.stderr:problem writing to /var/log/ceph/ceph-osd.3.log: (28) No space left on device
2026-03-31T19:37:26.919 INFO:tasks.ceph.osd.2.vm05.stderr:problem writing to /var/log/ceph/ceph-osd.2.log: (28) No space left on device
2026-03-31T19:37:40.959 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:37:40.959 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:37:40.960 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985860
2026-03-31T19:37:40.960 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1460
2026-03-31T19:37:40.960 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1460 -ge 3600 ']'
2026-03-31T19:37:40.960 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:37:40.960 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:37:41.282 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 25m) [leader: a]
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 25m)
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 25m), 6 in (since 25m)
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: objects: 48.97k objects, 48 GiB
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.8 MiB/s wr, 0 op/s rd, 534 op/s wr
2026-03-31T19:37:41.580 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:37:41.593 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:37:41.593 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 472633,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 18.060805113,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3469,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 3,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1687491,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 460847,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1509112,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 97389,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 51289,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 23903,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 26980,
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:37:41.701 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 298340,
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:37:41.702 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:38:41.702 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:38:41.702 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:38:41.703 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985921
2026-03-31T19:38:41.703 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1521
2026-03-31T19:38:41.703 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1521 -ge 3600 ']'
2026-03-31T19:38:41.703 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:38:41.704 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:38:42.025 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 26m) [leader: a]
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 26m)
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 26m), 6 in (since 26m)
2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.09k objects, 48 GiB 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.5 MiB/s wr, 0 op/s rd, 637 op/s wr 2026-03-31T19:38:42.337 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:38:42.350 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:38:42.350 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:38:42.450 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 492091, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 19.144094706, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3358, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1754565, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 479157, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1575527, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 104310, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 52387, 
2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24075, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 30, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 31076, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 312129, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:38:42.451 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:39:42.452 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:39:42.452 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:39:42.453 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774985982 2026-03-31T19:39:42.453 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1582 2026-03-31T19:39:42.453 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1582 -ge 3600 ']' 2026-03-31T19:39:42.453 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:39:42.453 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:39:42.774 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:39:43.071 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: 
health: HEALTH_WARN 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 28m) [leader: a] 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 27m) 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 27m), 6 in (since 27m) 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.18k objects, 49 GiB 2026-03-31T19:39:43.072 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail 2026-03-31T19:39:43.073 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:39:43.073 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:39:43.073 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:39:43.073 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.5 MiB/s wr, 0 op/s rd, 636 op/s wr 2026-03-31T19:39:43.073 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:39:43.086 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:39:43.087 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 509883, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 20.167125842, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:39:43.186 
INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3336, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 3, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1815869, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 495930, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1636561, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 110640, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 52677, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24215, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 33, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 47460, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 324885, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:39:43.186 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:40:43.187 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:40:43.187 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:40:43.188 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986043 2026-03-31T19:40:43.188 INFO:tasks.workunit.client.0.vm01.stderr:+ 
ELAPSED=1643 2026-03-31T19:40:43.188 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1643 -ge 3600 ']' 2026-03-31T19:40:43.188 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:40:43.188 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:40:43.495 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 29m) [leader: a] 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 28m) 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 28m), 6 in (since 29m) 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.27k objects, 49 GiB 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:40:43.792 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.7 MiB/s wr, 0 op/s rd, 518 op/s wr 2026-03-31T19:40:43.793 
INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:40:43.806 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:40:43.806 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:40:43.914 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 529223, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 21.165134107, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3331, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1882860, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 514075, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1703702, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 117664, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 53050, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24140, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 34, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 47460, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 338903, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:40:43.915 
INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:40:43.915 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:41:43.916 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:41:43.916 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:41:43.917 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986103 2026-03-31T19:41:43.917 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1703 2026-03-31T19:41:43.917 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1703 -ge 3600 ']' 2026-03-31T19:41:43.917 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:41:43.917 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:41:44.228 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 30m) [leader: a] 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 29m) 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 29m), 6 in (since 30m) 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:41:44.523 
INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.36k objects, 49 GiB 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: usage: 38 GiB used, 502 GiB / 540 GiB avail 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:41:44.523 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.6 MiB/s wr, 0 op/s rd, 499 op/s wr 2026-03-31T19:41:44.524 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:41:44.536 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:41:44.537 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 548247, 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 22.244903844, 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3289, 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:41:44.645 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 1949255, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 532319, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1770174, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 124779, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 53611, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24110, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 
2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 27, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 18788, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 352882, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:41:44.646 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:42:44.629 INFO:tasks.ceph.osd.5.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.5.log: (28) No space left on device 2026-03-31T19:42:44.630 INFO:tasks.ceph.osd.4.vm06.stderr:problem writing to /var/log/ceph/ceph-osd.4.log: (28) No space left on device 2026-03-31T19:42:44.647 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:42:44.647 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:42:44.648 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986164 2026-03-31T19:42:44.648 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1764 2026-03-31T19:42:44.648 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1764 -ge 3600 ']' 2026-03-31T19:42:44.648 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:42:44.648 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:42:44.960 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: id: 
0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 31m) [leader: a] 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 31m) 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 30m), 6 in (since 31m) 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.42k objects, 50 GiB 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: usage: 40 GiB used, 500 GiB / 540 GiB avail 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.2 MiB/s wr, 0 op/s rd, 464 op/s wr 2026-03-31T19:42:45.268 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:42:45.283 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:42:45.283 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 568105, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 23.354525090, 
2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3309, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2017401, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 550935, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1839120, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 132237, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 53204, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24158, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556, 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:42:45.396 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 367278, 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:42:45.397 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:43:45.398 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:43:45.398 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:43:45.399 INFO:tasks.workunit.client.0.vm01.stderr:+ 
CURRENT_TIME=1774986225 2026-03-31T19:43:45.399 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1825 2026-03-31T19:43:45.399 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1825 -ge 3600 ']' 2026-03-31T19:43:45.399 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:43:45.399 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:43:45.727 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 32m) [leader: a] 2026-03-31T19:43:46.037 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 32m) 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 31m), 6 in (since 32m) 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.48k objects, 50 GiB 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: usage: 39 GiB used, 501 GiB / 540 GiB avail 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:43:46.038 
INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 494 op/s wr 2026-03-31T19:43:46.038 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:43:46.052 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:43:46.052 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 587625, 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 24.494564345, 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3310, 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:43:46.155 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2084770, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 569431, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1906503, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 139836, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 53972, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24302, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 381457, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 
2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:43:46.156 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:44:06.951 INFO:tasks.ceph.osd.0.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device
2026-03-31T19:44:06.951 INFO:tasks.ceph.osd.1.vm03.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device
2026-03-31T19:44:46.157 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:44:46.157 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:44:46.158 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986286
2026-03-31T19:44:46.158 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1886
2026-03-31T19:44:46.158 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1886 -ge 3600 ']'
2026-03-31T19:44:46.158 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:44:46.158 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:44:46.471 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 33m) [leader: a]
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 33m)
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 32m), 6 in (since 33m)
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.52k objects, 50 GiB
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: usage: 40 GiB used, 500 GiB / 540 GiB avail
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.6 MiB/s wr, 0 op/s rd, 512 op/s wr
2026-03-31T19:44:46.759 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:44:46.773 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:44:46.773 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 606909,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 25.549096420,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3316,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2150843,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 587408,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 1973717,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 147084,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 54273,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24226,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 29,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 26980,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 395621,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:44:46.879 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:44:46.880 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:44:46.880 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:44:46.880 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:44:46.880 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:44:46.880 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:45:46.880 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:45:46.881 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:45:46.881 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986346
2026-03-31T19:45:46.881 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=1946
2026-03-31T19:45:46.881 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 1946 -ge 3600 ']'
2026-03-31T19:45:46.882 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:45:46.882 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:45:47.203 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 34m) [leader: a]
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 34m)
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 33m), 6 in (since 34m)
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.57k objects, 51 GiB
2026-03-31T19:45:47.485 INFO:tasks.workunit.client.0.vm01.stdout: usage: 40 GiB used, 500 GiB / 540 GiB avail
2026-03-31T19:45:47.486 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:45:47.486 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:45:47.486 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:45:47.486 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.4 MiB/s wr, 0 op/s rd, 497 op/s wr
2026-03-31T19:45:47.486 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:45:47.499 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:45:47.499 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 626565,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 26.599161016,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3221,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2218300,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 605685,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2043453,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 154520,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 55274,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24307,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 51,
2026-03-31T19:45:47.598 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 117092,
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 410290,
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:45:47.599 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:46:47.600 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:46:47.600 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:46:47.600 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986407
2026-03-31T19:46:47.600 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2007
2026-03-31T19:46:47.600 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2007 -ge 3600 ']'
2026-03-31T19:46:47.601 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:46:47.601 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:46:47.908 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 35m) [leader: a]
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 35m)
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 34m), 6 in (since 35m)
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.62k objects, 51 GiB
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: usage: 40 GiB used, 500 GiB / 540 GiB avail
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:46:48.205 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:46:48.206 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 496 op/s wr
2026-03-31T19:46:48.206 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:46:48.219 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:46:48.219 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 646957,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 27.791419194,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3270,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 2,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2287427,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 624587,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2115619,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 162549,
2026-03-31T19:46:48.328 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 55618,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24415,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 60,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 153956,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 425350,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:46:48.329 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:47:48.330 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:47:48.330 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:47:48.331 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986468
2026-03-31T19:47:48.331 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2068
2026-03-31T19:47:48.331 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2068 -ge 3600 ']'
2026-03-31T19:47:48.331 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:47:48.331 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:47:48.663 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 36m) [leader: a]
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 36m)
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 35m), 6 in (since 36m)
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.66k objects, 51 GiB
2026-03-31T19:47:48.962 INFO:tasks.workunit.client.0.vm01.stdout: usage: 41 GiB used, 499 GiB / 540 GiB avail
2026-03-31T19:47:48.963 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:47:48.963 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:47:48.963 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:47:48.963 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.3 MiB/s wr, 0 op/s rd, 480 op/s wr
2026-03-31T19:47:48.963 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:47:48.979 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:47:48.979 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 666415,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 28.881391841,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3243,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2354095,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 642896,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2185121,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 170430,
2026-03-31T19:47:49.094 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 55593,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24626,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 440043,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:47:49.095 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:48:49.096 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:48:49.096 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:48:49.096 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986529
2026-03-31T19:48:49.096 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2129
2026-03-31T19:48:49.096 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2129 -ge 3600 ']'
2026-03-31T19:48:49.097 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:48:49.097 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:48:49.405 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:48:49.693 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 37m) [leader: a]
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 37m)
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 37m), 6 in (since 37m)
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.70k objects, 51 GiB
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: usage: 41 GiB used, 499 GiB / 540 GiB avail
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.5 MiB/s wr, 0 op/s rd, 645 op/s wr
2026-03-31T19:48:49.694 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:48:49.707 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:48:49.707 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 685653,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 29.992459106,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3260,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2420364,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 661024,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2254736,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 178430,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 56211,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24592,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 454737,
2026-03-31T19:48:49.815 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:48:49.816 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:49:49.816 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:49:49.817 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:49:49.817 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986589
2026-03-31T19:49:49.817 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2189
2026-03-31T19:49:49.817 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2189 -ge 3600 ']'
2026-03-31T19:49:49.817 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:49:49.818 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:49:50.130 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 38m) [leader: a]
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 38m)
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 38m), 6 in (since 38m)
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.74k objects, 51 GiB
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: usage: 42 GiB used, 498 GiB / 540 GiB avail
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.4 MiB/s wr, 0 op/s rd, 647 op/s wr
2026-03-31T19:49:50.437 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:49:50.452 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:49:50.452 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 704681,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 31.107575361,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3207,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2486161,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 679017,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2322472,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 186276,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 56044,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24468,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 469037,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:49:50.553 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:50:50.554 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:50:50.554 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:50:50.555 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986650
2026-03-31T19:50:50.555 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2250
2026-03-31T19:50:50.555 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2250 -ge 3600 ']'
2026-03-31T19:50:50.555 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:50:50.555 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:50:50.885 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 39m) [leader: a]
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 39m)
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 39m), 6 in (since 39m)
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.77k objects, 52 GiB
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: usage: 41 GiB used, 499 GiB / 540 GiB avail
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.1 MiB/s wr, 0 op/s rd, 447 op/s wr
2026-03-31T19:50:51.201 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:50:51.221 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:50:51.221 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:50:51.329 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:50:51.329 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 723941,
2026-03-31T19:50:51.329 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 32.218556570,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3210,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2552400,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 697095,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2391718,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 194172,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 56657,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24468,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 38,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 63844,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 483641,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:50:51.330 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:51:51.330 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:51:51.331 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:51:51.331 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986711
2026-03-31T19:51:51.331 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2311
2026-03-31T19:51:51.332 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2311 -ge 3600 ']'
2026-03-31T19:51:51.332 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:51:51.332 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:51:51.657 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 40m) [leader: a]
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 40m)
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 40m), 6 in (since 40m)
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.79k objects, 52 GiB
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: usage: 41 GiB used, 499 GiB / 540 GiB avail
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 567 op/s wr
2026-03-31T19:51:51.961 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:51:51.975 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:51:51.975 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 743469,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 33.320047243,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3228,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2619756,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 715554,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2462272,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 202404,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 56566,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24454,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 498592,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:51:52.093 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T19:52:52.094 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T19:52:52.094 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T19:52:52.095 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986772
2026-03-31T19:52:52.095 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2372
2026-03-31T19:52:52.095 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2372 -ge 3600 ']'
2026-03-31T19:52:52.095 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T19:52:52.095 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T19:52:52.415 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 41m) [leader: a]
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 41m)
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 41m), 6 in (since 41m)
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.81k objects, 52 GiB
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: usage: 41 GiB used, 499 GiB / 540 GiB avail
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.6 MiB/s wr, 0 op/s rd, 666 op/s wr
2026-03-31T19:52:52.723 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T19:52:52.737 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T19:52:52.737 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T19:52:52.838 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T19:52:52.838 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 763493,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 34.565449689,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3248,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2688121,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 734204,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2534628,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 210843,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 56744,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24626,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 513874,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T19:52:52.839 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T19:52:52.839
INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:53:52.840 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:53:52.840 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:53:52.841 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986832 2026-03-31T19:53:52.841 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2432 2026-03-31T19:53:52.841 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2432 -ge 3600 ']' 2026-03-31T19:53:52.841 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:53:52.841 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:53:53.236 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 42m) [leader: a] 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 42m) 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 42m), 6 in (since 42m) 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.83k objects, 52 GiB 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: usage: 43 GiB used, 497 GiB / 540 GiB avail 
2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.4 MiB/s wr, 0 op/s rd, 650 op/s wr 2026-03-31T19:53:53.542 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:53:53.557 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:53:53.557 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 782603, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 35.662835718, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3159, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2754496, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 752387, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2604598, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 219408, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57163, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24638, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35, 2026-03-31T19:53:53.660 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556, 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:53:53.661 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 528839, 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:53:53.661 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:54:53.662 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:54:53.662 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:54:53.663 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986893 2026-03-31T19:54:53.663 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2493 2026-03-31T19:54:53.663 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2493 -ge 3600 ']' 2026-03-31T19:54:53.663 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:54:53.663 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:54:53.984 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 43m) [leader: a] 2026-03-31T19:54:54.282 
INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 43m) 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 43m), 6 in (since 43m) 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:54:54.282 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.84k objects, 52 GiB 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: usage: 44 GiB used, 496 GiB / 540 GiB avail 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.7 MiB/s wr, 0 op/s rd, 535 op/s wr 2026-03-31T19:54:54.283 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:54:54.296 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:54:54.296 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:54:54.396 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:54:54.396 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 802309, 2026-03-31T19:54:54.396 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 36.914716920, 2026-03-31T19:54:54.396 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3189, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2821645, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 770929, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2675913, 
2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 228120, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57381, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24832, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 39, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 67940, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 544107, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:54:54.397 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:55:54.398 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:55:54.398 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:55:54.399 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774986954 2026-03-31T19:55:54.399 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2554 2026-03-31T19:55:54.399 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2554 -ge 3600 ']' 2026-03-31T19:55:54.399 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:55:54.399 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:55:54.721 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 
cluster: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 44m) [leader: a] 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 44m) 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 44m), 6 in (since 44m) 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.86k objects, 53 GiB 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: usage: 44 GiB used, 496 GiB / 540 GiB avail 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 505 op/s wr 2026-03-31T19:55:55.021 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:55:55.035 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:55:55.035 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 821895, 2026-03-31T19:55:55.141 
INFO:tasks.workunit.client.0.vm01.stdout: "sum": 38.042103430, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3218, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2888854, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 789280, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2747308, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 236722, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57182, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24604, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 559378, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:55:55.141 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:56:55.142 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:56:55.142 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 
2026-03-31T19:56:55.143 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987015 2026-03-31T19:56:55.143 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2615 2026-03-31T19:56:55.143 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2615 -ge 3600 ']' 2026-03-31T19:56:55.143 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:56:55.143 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:56:55.473 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 45m) [leader: a] 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 45m) 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 45m), 6 in (since 45m) 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.87k objects, 53 GiB 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: usage: 42 GiB used, 498 GiB / 540 GiB avail 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:56:55.802 
INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.2 MiB/s wr, 0 op/s rd, 455 op/s wr 2026-03-31T19:56:55.802 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:56:55.817 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:56:55.817 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 841087, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 39.262220932, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3166, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 2954674, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 807485, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2817276, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 245471, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57550, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24771, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 36, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 55652, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 574444, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:56:55.925 
INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:56:55.925 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:57:55.926 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:57:55.926 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:57:55.927 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987075 2026-03-31T19:57:55.927 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2675 2026-03-31T19:57:55.927 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2675 -ge 3600 ']' 2026-03-31T19:57:55.927 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:57:55.927 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:57:56.234 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 46m) [leader: a] 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 46m) 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 46m), 6 in (since 46m) 2026-03-31T19:57:56.522 
INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.88k objects, 53 GiB 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: usage: 44 GiB used, 496 GiB / 540 GiB avail 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.3 MiB/s wr, 0 op/s rd, 474 op/s wr 2026-03-31T19:57:56.522 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:57:56.535 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:57:56.535 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 861145, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 40.458843840, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3145, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3023278, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 826319, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2890565, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 254570, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57771, 2026-03-31T19:57:56.639 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24729, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 590035, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:57:56.639 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:58:56.640 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:58:56.640 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:58:56.641 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987136 2026-03-31T19:58:56.641 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2736 2026-03-31T19:58:56.641 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2736 -ge 3600 ']' 2026-03-31T19:58:56.641 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:58:56.641 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:58:56.985 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 
2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 47m) [leader: a] 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 47m) 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 47m), 6 in (since 47m) 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.90k objects, 53 GiB 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: usage: 44 GiB used, 496 GiB / 540 GiB avail 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.7 MiB/s wr, 0 op/s rd, 667 op/s wr 2026-03-31T19:58:57.298 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:58:57.315 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:58:57.315 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:58:57.425 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:58:57.425 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 881801, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 41.669490988, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: 
"onodes": 3116, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3093645, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 845692, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 2965544, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 263835, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 57826, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24796, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 42, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 84324, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 606067, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:58:57.426 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T19:59:57.427 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T19:59:57.427 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T19:59:57.428 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987197 2026-03-31T19:59:57.428 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2797 2026-03-31T19:59:57.428 
INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2797 -ge 3600 ']' 2026-03-31T19:59:57.428 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T19:59:57.428 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T19:59:57.738 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 48m) [leader: a] 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 48m) 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 48m), 6 in (since 48m) 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.91k objects, 53 GiB 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: usage: 45 GiB used, 495 GiB / 540 GiB avail 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.9 MiB/s wr, 0 op/s rd, 726 op/s wr 2026-03-31T19:59:58.037 INFO:tasks.workunit.client.0.vm01.stdout: 
2026-03-31T19:59:58.050 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T19:59:58.050 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 901171, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 42.801418524, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3144, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3160252, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 863911, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3037326, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 272630, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 58079, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24804, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 37, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 59748, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 621510, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 
2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T19:59:58.156 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:00:58.157 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:00:58.157 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:00:58.158 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987258 2026-03-31T20:00:58.158 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2858 2026-03-31T20:00:58.158 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2858 -ge 3600 ']' 2026-03-31T20:00:58.158 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:00:58.158 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T20:00:58.476 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:00:58.768 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:00:58.768 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:00:58.768 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:00:58.768 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 49m) [leader: a] 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 49m) 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 49m), 6 in (since 49m) 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:00:58.769 
INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.92k objects, 53 GiB 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: usage: 45 GiB used, 495 GiB / 540 GiB avail 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.5 MiB/s wr, 0 op/s rd, 655 op/s wr 2026-03-31T20:00:58.769 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:00:58.781 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:00:58.781 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 922265, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 44.012640640, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3064, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3232272, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 883728, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3115306, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 282666, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59092, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25015, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28, 
2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 26980, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 638161, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:00:58.887 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:00:58.888 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:00:58.888 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:00:58.888 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:00:58.888 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:00:58.888 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:01:58.889 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:01:58.889 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:01:58.889 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987318 2026-03-31T20:01:58.890 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2918 2026-03-31T20:01:58.890 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2918 -ge 3600 ']' 2026-03-31T20:01:58.890 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:01:58.890 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T20:01:59.206 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 
services: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 50m) [leader: a] 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 50m) 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 50m), 6 in (since 50m) 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.93k objects, 54 GiB 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: usage: 43 GiB used, 497 GiB / 540 GiB avail 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.6 MiB/s wr, 0 op/s rd, 661 op/s wr 2026-03-31T20:01:59.503 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:01:59.517 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:01:59.518 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:01:59.619 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 943667, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 45.242221513, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3070, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3304942, 2026-03-31T20:01:59.620 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 903678, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3194638, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 292542, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59345, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25015, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 34, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 655133, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:01:59.620 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:02:59.621 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:02:59.621 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:02:59.622 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987379 2026-03-31T20:02:59.622 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=2979 2026-03-31T20:02:59.622 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 2979 -ge 3600 ']' 2026-03-31T20:02:59.622 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:02:59.622 INFO:tasks.workunit.client.0.vm01.stderr:+ 
grep -i 'osd.*down' 2026-03-31T20:02:59.931 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 51m) [leader: a] 2026-03-31T20:03:00.226 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 51m) 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 51m), 6 in (since 51m) 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.93k objects, 54 GiB 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: usage: 43 GiB used, 497 GiB / 540 GiB avail 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.9 MiB/s wr, 0 op/s rd, 713 op/s wr 2026-03-31T20:03:00.227 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:03:00.240 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:03:00.240 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:03:00.343 
INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 964713, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 46.535099392, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3072, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3376796, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 923650, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3272745, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 302798, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 58528, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24929, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 35, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 51556, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 671974, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:03:00.343 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:03:00.344 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:03:00.344 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:03:00.344 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:03:00.344 
INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:04:00.345 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:04:00.345 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:04:00.345 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987440 2026-03-31T20:04:00.345 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3040 2026-03-31T20:04:00.346 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3040 -ge 3600 ']' 2026-03-31T20:04:00.346 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:04:00.346 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T20:04:00.650 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 52m) [leader: a] 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 52m) 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 52m), 6 in (since 52m) 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.94k objects, 54 GiB 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: usage: 45 GiB used, 495 GiB / 540 GiB avail 
2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.6 MiB/s wr, 0 op/s rd, 680 op/s wr 2026-03-31T20:04:00.936 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:04:00.951 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:04:00.951 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 985841, 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 47.776380763, 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3104, 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3449920, 2026-03-31T20:04:01.073 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 943748, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3352026, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 313058, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59101, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24927, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 31, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 35172, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:04:01.074 
INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 689068, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:04:01.074 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:05:01.075 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:05:01.075 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:05:01.076 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987501 2026-03-31T20:05:01.076 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3101 2026-03-31T20:05:01.076 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3101 -ge 3600 ']' 2026-03-31T20:05:01.076 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:05:01.076 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T20:05:01.382 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 53m) [leader: a] 2026-03-31T20:05:01.692 
INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 53m) 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 53m), 6 in (since 53m) 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.94k objects, 54 GiB 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: usage: 45 GiB used, 495 GiB / 540 GiB avail 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 557 op/s wr 2026-03-31T20:05:01.692 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:05:01.705 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:05:01.706 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1006679, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 49.096479027, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3052, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3521262, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 963351, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3430446, 
2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 323081, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59537, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24915, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 30, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 31076, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 705819, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:05:01.807 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:06:01.808 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:06:01.808 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:06:01.809 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987561 2026-03-31T20:06:01.809 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3161 2026-03-31T20:06:01.809 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3161 -ge 3600 ']' 2026-03-31T20:06:01.809 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:06:01.809 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down' 2026-03-31T20:06:02.110 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 
cluster: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 54m) [leader: a] 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 54m) 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 54m), 6 in (since 54m) 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.95k objects, 54 GiB 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: usage: 46 GiB used, 494 GiB / 540 GiB avail 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: io: 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 568 op/s wr 2026-03-31T20:06:02.400 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:06:02.415 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore 2026-03-31T20:06:02.416 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": { 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1028343, 2026-03-31T20:06:02.514 
INFO:tasks.workunit.client.0.vm01.stdout: "sum": 50.418141477, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3057, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3595174, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 983642, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3511378, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 333594, 2026-03-31T20:06:02.514 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59757, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25084, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 27, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 18788, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 723265, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout:-- 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0, 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stdout: } 2026-03-31T20:06:02.515 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60 2026-03-31T20:07:02.516 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:07:02.516 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 
2026-03-31T20:07:02.517 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987622
2026-03-31T20:07:02.517 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3222
2026-03-31T20:07:02.517 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3222 -ge 3600 ']'
2026-03-31T20:07:02.517 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:07:02.517 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:07:02.819 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:07:03.123 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 55m) [leader: a]
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 55m)
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 55m), 6 in (since 55m)
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.95k objects, 54 GiB
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: usage: 44 GiB used, 496 GiB / 540 GiB avail
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.7 MiB/s wr, 0 op/s rd, 537 op/s wr
2026-03-31T20:07:03.124 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:07:03.137 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:07:03.137 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1049657,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 51.701207804,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3047,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3667248,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1003569,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3590389,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 344011,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 60187,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25194,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 37,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 59748,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 740213,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:07:03.243 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:08:03.244 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:08:03.244 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:08:03.245 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987683
2026-03-31T20:08:03.245 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3283
2026-03-31T20:08:03.245 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3283 -ge 3600 ']'
2026-03-31T20:08:03.245 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:08:03.245 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:08:03.549 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 56m) [leader: a]
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 56m)
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 56m), 6 in (since 56m)
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.96k objects, 54 GiB
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: usage: 46 GiB used, 494 GiB / 540 GiB avail
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 511 op/s wr
2026-03-31T20:08:03.845 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:08:03.858 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:08:03.858 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1070963,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 53.022966291,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3095,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3740117,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1023742,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3671932,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 354721,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59296,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25131,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 32,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 43364,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 757952,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:08:03.960 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:08:03.961 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:09:03.961 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:09:03.962 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:09:03.962 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987743
2026-03-31T20:09:03.962 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3343
2026-03-31T20:09:03.962 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3343 -ge 3600 ']'
2026-03-31T20:09:03.962 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:09:03.963 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:09:04.262 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:09:04.561 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:09:04.561 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:09:04.561 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:09:04.561 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 57m) [leader: a]
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 57m)
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 57m), 6 in (since 57m)
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.97k objects, 54 GiB
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: usage: 47 GiB used, 493 GiB / 540 GiB avail
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 571 op/s wr
2026-03-31T20:09:04.562 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:09:04.575 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:09:04.575 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1091747,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 54.324766396,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3045,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3811198,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1043309,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3750781,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 365247,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 60669,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25136,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 36,
2026-03-31T20:09:04.681 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 55652,
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 774920,
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:09:04.682 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:10:04.683 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:10:04.683 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:10:04.683 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987804
2026-03-31T20:10:04.684 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3404
2026-03-31T20:10:04.684 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3404 -ge 3600 ']'
2026-03-31T20:10:04.684 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:10:04.684 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:10:04.992 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 58m) [leader: a]
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 58m)
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 58m), 6 in (since 58m)
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.97k objects, 54 GiB
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: usage: 47 GiB used, 493 GiB / 540 GiB avail
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.9 MiB/s wr, 0 op/s rd, 578 op/s wr
2026-03-31T20:10:05.289 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:10:05.302 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:10:05.302 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1113247,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 55.650950902,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3091,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 2,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3884233,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1063363,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3832150,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 375764,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 59751,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 24956,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 32,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 39268,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 792210,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:10:05.406 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:11:05.407 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:11:05.407 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:11:05.408 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987865
2026-03-31T20:11:05.408 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3465
2026-03-31T20:11:05.408 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3465 -ge 3600 ']'
2026-03-31T20:11:05.408 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:11:05.408 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:11:08.722 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 59m) [leader: a]
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 59m)
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 59m), 6 in (since 59m)
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.97k objects, 55 GiB
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: usage: 47 GiB used, 493 GiB / 540 GiB avail
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.6 MiB/s wr, 0 op/s rd, 514 op/s wr
2026-03-31T20:11:09.012 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:11:09.025 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:11:09.025 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1136013,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 57.110954141,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3038,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 3961358,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1084628,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3918328,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 387129,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 60307,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25144,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 36,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 55652,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 810772,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:11:09.123 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:12:09.124 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987929
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3529
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3529 -ge 3600 ']'
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:12:09.125 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:12:09.428 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:12:09.716 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 60m) [leader: a]
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 60m)
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 60m), 6 in (since 60m)
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.97k objects, 55 GiB
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: usage: 47 GiB used, 493 GiB / 540 GiB avail
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout: client: 4.2 MiB/s wr, 0 op/s rd, 602 op/s wr
2026-03-31T20:12:09.717 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:12:09.730 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:12:09.730 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:12:09.837 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1157453,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 58.535914905,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3041,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 1,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 4034286,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1104792,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 3999769,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 398055,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 60433,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25144,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 32,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 39268,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 828462,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:12:09.838 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:13:09.839 INFO:tasks.workunit.client.0.vm01.stderr:+ true
2026-03-31T20:13:09.839 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-31T20:13:09.840 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774987989
2026-03-31T20:13:09.840 INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3589
2026-03-31T20:13:09.840 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3589 -ge 3600 ']'
2026-03-31T20:13:09.840 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail
2026-03-31T20:13:09.840 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -i 'osd.*down'
2026-03-31T20:13:10.149 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: cluster:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_WARN
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: 1 pool(s) do not have an application enabled
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: services:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 61m) [leader: a]
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 61m)
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 61m), 6 in (since 61m)
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: data:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: pools: 3 pools, 25 pgs
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: objects: 49.98k objects, 55 GiB
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: usage: 46 GiB used, 494 GiB / 540 GiB avail
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 25 active+clean
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: io:
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout: client: 3.5 MiB/s wr, 0 op/s rd, 508 op/s wr
2026-03-31T20:13:10.455 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:13:10.469 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph tell osd.0 perf dump bluestore
2026-03-31T20:13:10.469 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -A 2 onode
2026-03-31T20:13:10.571 INFO:tasks.workunit.client.0.vm01.stdout: "read_onode_meta_lat": {
2026-03-31T20:13:10.571 INFO:tasks.workunit.client.0.vm01.stdout: "avgcount": 1178757,
2026-03-31T20:13:10.571 INFO:tasks.workunit.client.0.vm01.stdout: "sum": 59.872065666,
2026-03-31T20:13:10.571 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onodes": 3028,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onodes_pinned": 0,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_hits": 4106777,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_misses": 1124789,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_hits": 4081140,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_shard_misses": 408942,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_extents": 61467,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_blobs": 25305,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_spanning_blobs": 0,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "buffers": 28,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "buffer_bytes": 22884,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "onode_reshard": 845943,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "blob_split": 0,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "extent_compress": 0,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout:--
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_onode_meta_count": 0,
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: "slow_read_wait_aio_count": 0
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stdout: }
2026-03-31T20:13:10.572 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 60
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout:
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout:test-ec-esb: (groupid=0, jobs=1): err= 0: pid=23846: Tue Mar 31 20:13:20 2026
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: write: IOPS=163, BW=1182KiB/s (1211kB/s)(4157MiB/3600223msec)
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: slat (nsec): min=1263, max=4055.8k, avg=17398.41, stdev=17894.83
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: clat (msec): min=2, max=2234, avg=195.13, stdev=192.57
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec): min=2, max=2234, avg=195.14, stdev=192.57
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: clat percentiles (msec):
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: | 1.00th=[ 19], 5.00th=[ 39], 10.00th=[ 54], 20.00th=[ 78],
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: | 30.00th=[ 100], 40.00th=[ 121], 50.00th=[ 142], 60.00th=[ 169],
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: | 70.00th=[ 201], 80.00th=[ 247], 90.00th=[ 372], 95.00th=[ 609],
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: | 99.00th=[ 1028], 99.50th=[ 1167], 99.90th=[ 1485], 99.95th=[ 1636],
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: | 99.99th=[ 1938]
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: bw ( KiB/s): min= 176, max= 4144, per=27.22%, avg=1182.54, stdev=270.16, samples=7199
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: iops : min= 24, max= 494, avg=163.99, stdev=32.50, samples=7199
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 4=0.01%, 10=0.23%, 20=1.01%, 50=7.61%, 100=21.67%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 250=50.03%, 500=12.70%, 750=3.62%, 1000=2.00%, 2000=1.12%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : >=2000=0.01%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: cpu : usr=0.25%, sys=0.11%, ctx=662187, majf=0, minf=554
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: issued rwts: total=0,590357,0,0 short=0,0,0,0 dropped=0,0,0,0
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout: latency : target=0.00ns, window=0.00ns, percentile=100.00%, depth=32
2026-03-31T20:13:20.247 INFO:tasks.workunit.client.0.vm01.stdout:test-ec-esb: (groupid=0, jobs=1): err= 0: pid=23847: Tue Mar 31 20:13:20 2026
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: write: IOPS=153, BW=1110KiB/s (1137kB/s)(3903MiB/3600219msec)
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: slat (nsec): min=1543, max=3629.8k, avg=15457.41, stdev=15325.27
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat (usec): min=1955, max=2287.0k, avg=207834.21, stdev=205118.11
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (usec): min=1986, max=2287.0k, avg=207849.67, stdev=205117.61
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat percentiles (msec):
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 1.00th=[ 17], 5.00th=[ 37], 10.00th=[ 54], 20.00th=[ 80],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 30.00th=[ 103], 40.00th=[ 126], 50.00th=[ 150], 60.00th=[ 176],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 70.00th=[ 209], 80.00th=[ 264], 90.00th=[ 430], 95.00th=[ 676],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.00th=[ 1045], 99.50th=[ 1183], 99.90th=[ 1502], 99.95th=[ 1653],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.99th=[ 1955]
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: bw ( KiB/s): min= 16, max= 6784, per=25.56%, avg=1110.35, stdev=342.69, samples=7200
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: iops : min= 4, max= 858, avg=153.96, stdev=41.52, samples=7200
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 2=0.01%, 4=0.02%, 10=0.30%, 20=1.11%, 50=7.41%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 100=20.00%, 250=49.31%, 500=13.59%, 750=4.49%, 1000=2.50%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 2000=1.27%, >=2000=0.01%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: cpu : usr=0.24%, sys=0.10%, ctx=565387, majf=0, minf=593
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: issued rwts: total=0,554270,0,0 short=0,0,0,0 dropped=0,0,0,0
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: latency : target=0.00ns, window=0.00ns, percentile=100.00%, depth=32
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout:test-ec-esb: (groupid=0, jobs=1): err= 0: pid=23848: Tue Mar 31 20:13:20 2026
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: write: IOPS=159, BW=1151KiB/s (1179kB/s)(4048MiB/3600210msec)
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: slat (nsec): min=1363, max=2001.4k, avg=14036.71, stdev=11724.36
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat (usec): min=1989, max=2283.4k, avg=200292.99, stdev=186544.02
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec): min=2, max=2283, avg=200.31, stdev=186.54
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat percentiles (msec):
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 1.00th=[ 17], 5.00th=[ 38], 10.00th=[ 54], 20.00th=[ 82],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 30.00th=[ 107], 40.00th=[ 130], 50.00th=[ 155], 60.00th=[ 180],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 70.00th=[ 211], 80.00th=[ 257], 90.00th=[ 368], 95.00th=[ 600],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.00th=[ 1003], 99.50th=[ 1116], 99.90th=[ 1418], 99.95th=[ 1569],
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.99th=[ 1854]
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: bw ( KiB/s): min= 160, max= 5584, per=26.51%, avg=1151.41, stdev=309.40, samples=7199
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: iops : min= 28, max= 742, avg=159.77, stdev=37.28, samples=7199
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 2=0.01%, 4=0.02%, 10=0.33%, 20=1.12%, 50=7.30%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 100=18.76%, 250=51.43%, 500=14.43%, 750=3.64%, 1000=1.98%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 2000=0.99%, >=2000=0.01%
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: cpu : usr=0.25%, sys=0.10%, ctx=545950, majf=0, minf=563
2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%,
16=0.1%, 32=100.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: issued rwts: total=0,575140,0,0 short=0,0,0,0 dropped=0,0,0,0 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: latency : target=0.00ns, window=0.00ns, percentile=100.00%, depth=32 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout:test-ec-esb: (groupid=0, jobs=1): err= 0: pid=23849: Tue Mar 31 20:13:20 2026 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: write: IOPS=123, BW=899KiB/s (920kB/s)(3159MiB/3600197msec) 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: slat (nsec): min=1463, max=2041.5k, avg=19841.47, stdev=18580.90 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat (msec): min=2, max=2498, avg=259.48, stdev=263.53 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec): min=2, max=2498, avg=259.50, stdev=263.53 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: clat percentiles (msec): 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 1.00th=[ 17], 5.00th=[ 40], 10.00th=[ 56], 20.00th=[ 84], 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 30.00th=[ 109], 40.00th=[ 136], 50.00th=[ 165], 60.00th=[ 203], 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 70.00th=[ 262], 80.00th=[ 376], 90.00th=[ 634], 95.00th=[ 844], 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.00th=[ 1250], 99.50th=[ 1418], 99.90th=[ 1754], 99.95th=[ 1888], 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: | 99.99th=[ 2106] 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: bw ( KiB/s): min= 8, 
max= 6056, per=20.68%, avg=898.79, stdev=363.55, samples=7199 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: iops : min= 2, max= 732, avg=123.33, stdev=46.34, samples=7199 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 4=0.02%, 10=0.34%, 20=1.06%, 50=6.69%, 100=18.38% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : 250=42.09%, 500=17.09%, 750=7.45%, 1000=4.14%, 2000=2.70% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: lat (msec) : >=2000=0.02% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: cpu : usr=0.19%, sys=0.09%, ctx=529673, majf=0, minf=541 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: issued rwts: total=0,443941,0,0 short=0,0,0,0 dropped=0,0,0,0 2026-03-31T20:13:20.248 INFO:tasks.workunit.client.0.vm01.stdout: latency : target=0.00ns, window=0.00ns, percentile=100.00%, depth=32 2026-03-31T20:13:20.249 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:13:20.249 INFO:tasks.workunit.client.0.vm01.stdout:Run status group 0 (all jobs): 2026-03-31T20:13:20.249 INFO:tasks.workunit.client.0.vm01.stdout: WRITE: bw=4342KiB/s (4447kB/s), 899KiB/s-1182KiB/s (920kB/s-1211kB/s), io=14.9GiB (16.0GB), run=3600197-3600223msec 2026-03-31T20:14:10.573 INFO:tasks.workunit.client.0.vm01.stderr:+ true 2026-03-31T20:14:10.573 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774988050 2026-03-31T20:14:10.574 
INFO:tasks.workunit.client.0.vm01.stderr:+ ELAPSED=3650 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stdout:Reached 1-hour timeout, stopping FIO 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ '[' 3650 -ge 3600 ']' 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'Reached 1-hour timeout, stopping FIO' 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ break 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stdout:[ec-esb-fio] FIO test completed, log checks to follow 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ echo '[ec-esb-fio] FIO test completed, log checks to follow' 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ exit 0 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ cleanup 2026-03-31T20:14:10.574 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd pool rm ecpool ecpool --yes-i-really-really-mean-it 2026-03-31T20:14:10.947 INFO:tasks.workunit.client.0.vm01.stderr:pool 'ecpool' does not exist 2026-03-31T20:14:10.960 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd erasure-code-profile rm myecprofile 2026-03-31T20:14:11.957 INFO:tasks.workunit.client.0.vm01.stderr:erasure-code-profile myecprofile does not exist 2026-03-31T20:14:11.970 INFO:tasks.workunit.client.0.vm01.stderr:+ rm -rf /home/ubuntu/cephtest/fio 2026-03-31T20:14:12.003 INFO:tasks.workunit.client.0.vm01.stderr:+ status_log 2026-03-31T20:14:12.003 INFO:tasks.workunit.client.0.vm01.stderr:+ echo 'Cluster status on failure:' 2026-03-31T20:14:12.003 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph -s 2026-03-31T20:14:12.003 INFO:tasks.workunit.client.0.vm01.stdout:Cluster status on failure: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: cluster: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: id: 0c165360-d647-4244-b40a-096067707e1d 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: health: HEALTH_OK 
2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: services: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: mon: 1 daemons, quorum a (age 62m) [leader: a] 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: mgr: x(active, since 62m) 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: osd: 6 osds: 6 up (since 62m), 6 in (since 62m) 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: data: 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: pools: 2 pools, 9 pgs 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: objects: 4 objects, 449 KiB 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: usage: 46 GiB used, 494 GiB / 540 GiB avail 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: pgs: 9 active+clean 2026-03-31T20:14:12.279 INFO:tasks.workunit.client.0.vm01.stdout: 2026-03-31T20:14:12.292 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph health detail 2026-03-31T20:14:12.570 INFO:tasks.workunit.client.0.vm01.stdout:HEALTH_OK 2026-03-31T20:14:12.584 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-31T20:14:12.584 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-31T20:14:12.668 INFO:tasks.workunit:Stopping ['rados/ec-esb-fio.sh'] on client.0... 
2026-03-31T20:14:12.668 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-31T20:14:13.131 DEBUG:teuthology.parallel:result is None 2026-03-31T20:14:13.131 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-31T20:14:13.138 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-31T20:14:13.138 DEBUG:teuthology.orchestra.run.vm01:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-31T20:14:13.185 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-31T20:14:13.185 DEBUG:teuthology.run_tasks:Unwinding manager ceph 2026-03-31T20:14:13.187 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-31T20:14:13.187 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T20:14:13.394 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:13.394 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T20:14:13.408 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":1896,"stamp":"2026-03-31T20:14:13.223172+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":66,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":43467420,"kb_used_data":36672192,"kb_used_omap":28,"kb_used_meta":6669219,"kb_avail":522763620,"statfs":{"total":579820584960,"available":535309946880,"internally_reserved":0,"allocated":37552324608,"data_stored":37551381823,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":28786,"internal_metadata":6829281166},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1}
,"perf_stat":{"commit_latency_ms":80,"apply_latency_ms":80,"commit_latency_ns":80000000,"apply_latency_ns":80000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":-58805018624,"num_objects":-49977,"num_object_clones":0,"num_object_copies":-149931,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":-49977,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":-2163708,"num_write_kb":-15633416,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001048"},"pg_stats":[{"pgid":"1.0","version":"16'32","reported_seq":106,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160209+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T20:10:14.160209+0000","last_peered":"2026-03-31T20:10:14.160209+0000","last_clean":"2026-03-31T20:10:14.160209+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T20:10:14.160209+0000","last_undegraded":"2026-03-31T20:10:14.160209+0000","last_fullsized":"2026-03-31T20:1
0:14.160209+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.3","version":"17'1","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220480+0000","
last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T20:10:34.220480+0000","last_peered":"2026-03-31T20:10:34.220480+0000","last_clean":"2026-03-31T20:10:34.220480+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T20:10:34.220480+0000","last_undegraded":"2026-03-31T20:10:34.220480+0000","last_fullsized":"2026-03-31T20:10:34.220480+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160205+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T20:10:14.160205+0000","last_peered":"2026-03-31T20:10:14.160205+0000","last_clean":"2026-03-31T20:10:14.160205+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19:11:52.462306+0000","last_unstale":"2026-03-31T20:10:14.160205+0000","last_undegraded":"2026-03-31T20:10:14.160205+0000","last_fullsized":"2026-03-31T20:10:14.160205+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03
-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:19.569354+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T20:10:19.569354+0000","last_peered":"2026-03-31T20:10:19.569354+0000","last_clean":"2026-03-31T20:10:19.569354+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11
:52.471581+0000","last_unstale":"2026-03-31T20:10:19.569354+0000","last_undegraded":"2026-03-31T20:10:19.569354+0000","last_fullsized":"2026-03-31T20:10:19.569354+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1
,"purged_snaps":[]},{"pgid":"2.2","version":"19'2","reported_seq":66,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220474+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T20:10:34.220474+0000","last_peered":"2026-03-31T20:10:34.220474+0000","last_clean":"2026-03-31T20:10:34.220474+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T20:10:34.220474+0000","last_undegraded":"2026-03-31T20:10:34.220474+0000","last_fullsized":"2026-03-31T20:10:34.220474+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:29.625548+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T20:10:29.625548+0000","last_peered":"2026-03-31T20:10:29.625548+0000","last_clean":"2026-03-31T20:10:29.625548+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T19:11:52.471594+0000","last_unstale":"2026-03-31T20:10:29.625548+0000","last_undegraded":"2026-03-31T20:10:29.625548+0000","last_fullsized":"2026-03-31T20:10:29.625548+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:
51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":66,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-31T20:14:10.896452+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T20:14:10.896452+0000","last_peered":"2026-03-31T20:14:10.896452+0000","last_clean":"2026-03-31T20:14:10.896452+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:1
1:52.468346+0000","last_unstale":"2026-03-31T20:14:10.896452+0000","last_undegraded":"2026-03-31T20:14:10.896452+0000","last_fullsized":"2026-03-31T20:14:10.896452+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_s
naps":[]},{"pgid":"2.6","version":"0'0","reported_seq":63,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:24.579283+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T20:10:24.579283+0000","last_peered":"2026-03-31T20:10:24.579283+0000","last_clean":"2026-03-31T20:10:24.579283+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T20:10:24.579283+0000","last_undegraded":"2026-03-31T20:10:24.579283+0000","last_fullsized":"2026-03-31T20:10:24.579283+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":67,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.909414+0000","last_change":"2026-03-31T19:11:55.935961+0000","last_active":"2026-03-31T20:14:11.909414+0000","last_peered":"2026-03-31T20:14:11.909414+0000","last_clean":"2026-03-31T20:14:11.909414+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+0000","last_unstale":"2026-03-31T20:14:11.909414+0000","last_undegraded":"2026-03-31T20:14:11.909414+0000","last_fullsized":"2026-03-31T20:14:11.909414+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:5
1.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:10:37.606853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000107432,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":
0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":
3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542895,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":13199640,"kb_used_data":12594824,"kb_used_omap":4,"kb_used_meta":604795,"kb_avail":81172200,"statfs":{"total":96636764160,"available":83120332800,"internally_reserved":0,"allocated":12897099776,"data_stored":12896941343,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":619310728},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":12,"seq":51539608303,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":9185340,"kb_used_data":8572404,"kb_used_omap":5,"kb_used_meta":569146,"kb_avail":85186500,"statfs":{"total":96636764160,"available":87230976000,"internally_reserved":0,"allocated":8778141696,"data_stored":8777983319,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5126,"internal_metadata":582806522},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":4,"up_from":11,"seq":47244641007,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5403048,"kb_used_data":4456016,"kb_used_omap":5,"kb_used_meta":947002,"kb_avail":88968792,"statfs":{"total":96636764160,"available":91104043008,"internally_reserved":0,"allocated":4562960384,"data_stored":4562802007,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5123,"internal_metad
ata":969731069},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]},{"osd":3,"up_from":11,"seq":47244641005,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5276160,"kb_used_data":3567676,"kb_used_omap":5,"kb_used_meta":1626810,"kb_avail":89095680,"statfs":{"total":96636764160,"available":91233976320,"internally_reserved":0,"allocated":3653300224,"data_stored":3653141847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5122,"internal_metadata":1665854462},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":2,"up_from":11,"seq":47244641007,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5278044,"kb_used_data":3745264,"kb_used_omap":4,"kb_used_meta":1532731,"kb_avail":89093796,"statfs":{"total":96636764160,"available":91232047104,"internally_reserved":0,"allocated":3835150336,"data_stored":3834995980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":1569517192},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":17,"apply_latency_ms":17,"commit_latency_ns":17000000,"apply_latency_ns":17000000},"alerts":[]},{"osd":1,"up_from":11,"seq":47244641007,"num_pgs":13,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5125188,"kb_used_data":3736008,"kb_used_om
ap":4,"kb_used_meta":1388731,"kb_avail":89246652,"statfs":{"total":96636764160,"available":91388571648,"internally_reserved":0,"allocated":3825672192,"data_stored":3825517327,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4471,"internal_metadata":1422061193},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated"
:0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T20:14:13.409 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T20:14:13.575 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:13.575 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T20:14:13.591 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":1896,"stamp":"2026-03-31T20:14:13.223172+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_o
bjects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":66,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":43467420,"kb_used_data":36672192,"kb_used_omap":28,"kb_used_meta":6669219,"kb_avail":522763620,"statfs":{"total":579820584960,"available":535309946880,"internally_reserved":0,"allocated":37552324608,"data_stored":37551381823,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":28786,"internal_metadata":6829281166},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":80,"apply_latency_ms":80,"commit_latency_ns":80000000,"apply_latency_ns":80000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":-58805018624,"num_objects":-49977,"num_object_clones":0,"num_object_copies":-149931,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":-49977,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":-2163708,"num_write_kb":-15633416,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"
num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001048"},"pg_stats":[{"pgid":"1.0","version":"16'32","reported_seq":106,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160209+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T20:10:14.160209+0000","last_peered":"2026-03-31T20:10:14.160209+0000","last_clean":"2026-03-31T20:10:14.160209+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T20:10:14.160209+0000","last_undegraded":"2026-03-31T20:10:14.160209+0000","last_fullsized":"2026-03-31T20:10:14.160209+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.3","version":"17'1","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220480+0000","last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T20:10:34.220480+0000","last_peered":"2026-03-31T20:10:34.220480+0000","last_clean":"2026-03-31T20:10:34.220480+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T20:10:34.220480+0000","last_undegraded":"2026-03-31T20:10:34.220480+0000","last_fullsized":"2026-03-31T20:10:34.220480+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:
51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160205+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T20:10:14.160205+0000","last_peered":"2026-03-31T20:10:14.160205+0000","last_clean":"2026-03-31T20:10:14.160205+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19:1
1:52.462306+0000","last_unstale":"2026-03-31T20:10:14.160205+0000","last_undegraded":"2026-03-31T20:10:14.160205+0000","last_fullsized":"2026-03-31T20:10:14.160205+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_s
naps":[]},{"pgid":"2.1","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:19.569354+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T20:10:19.569354+0000","last_peered":"2026-03-31T20:10:19.569354+0000","last_clean":"2026-03-31T20:10:19.569354+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T20:10:19.569354+0000","last_undegraded":"2026-03-31T20:10:19.569354+0000","last_fullsized":"2026-03-31T20:10:19.569354+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.2","version":"19'2","reported_seq":66,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220474+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T20:10:34.220474+0000","last_peered":"2026-03-31T20:10:34.220474+0000","last_clean":"2026-03-31T20:10:34.220474+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T20:10:34.220474+0000","last_undegraded":"2026-03-31T20:10:34.220474+0000","last_fullsized":"2026-03-31T20:10:34.220474+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-0
3-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:29.625548+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T20:10:29.625548+0000","last_peered":"2026-03-31T20:10:29.625548+0000","last_clean":"2026-03-31T20:10:29.625548+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T19:
11:52.471594+0000","last_unstale":"2026-03-31T20:10:29.625548+0000","last_undegraded":"2026-03-31T20:10:29.625548+0000","last_fullsized":"2026-03-31T20:10:29.625548+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary"
:1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":66,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-31T20:14:10.896452+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T20:14:10.896452+0000","last_peered":"2026-03-31T20:14:10.896452+0000","last_clean":"2026-03-31T20:14:10.896452+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:11:52.468346+0000","last_unstale":"2026-03-31T20:14:10.896452+0000","last_undegraded":"2026-03-31T20:14:10.896452+0000","last_fullsized":"2026-03-31T20:14:10.896452+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":63,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:24.579283+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T20:10:24.579283+0000","last_peered":"2026-03-31T20:10:24.579283+0000","last_clean":"2026-03-31T20:10:24.579283+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T20:10:24.579283+0000","last_undegraded":"2026-03-31T20:10:24.579283+0000","last_fullsized":"2026-03-31T20:10:24.579283+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:5
1.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":67,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.909414+0000","last_change":"2026-03-31T19:11:55.935961+0000","last_active":"2026-03-31T20:14:11.909414+0000","last_peered":"2026-03-31T20:14:11.909414+0000","last_clean":"2026-03-31T20:14:11.909414+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+
0000","last_unstale":"2026-03-31T20:14:11.909414+0000","last_undegraded":"2026-03-31T20:14:11.909414+0000","last_fullsized":"2026-03-31T20:14:11.909414+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:10:37.606853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000107432,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"
pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"nu
m_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542895,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":13199640,"kb_used_data":12594824,"kb_used_omap":4,"kb_used_meta":604795,"kb_avail":81172200,"statfs":{"total":96636764160,"available":83120332800,"internally_reserved":0,"allocated":12897099776,"data_stored":12896941343,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":619310728},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":12,"seq":51539608303,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":9185340,"kb_used_data":8572404,"kb_used_omap":5,"kb_used_meta":569146,"kb_avail":85186500,"statfs":{"total":96636764160,"available":87230976000,"internally_reserved":0,"allocated":8778141696,"data_stored":8777983319,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5126,"internal_metadata":582806522},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":4,"up_from":11,"seq":47244641007,
"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5403048,"kb_used_data":4456016,"kb_used_omap":5,"kb_used_meta":947002,"kb_avail":88968792,"statfs":{"total":96636764160,"available":91104043008,"internally_reserved":0,"allocated":4562960384,"data_stored":4562802007,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5123,"internal_metadata":969731069},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]},{"osd":3,"up_from":11,"seq":47244641005,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5276160,"kb_used_data":3567676,"kb_used_omap":5,"kb_used_meta":1626810,"kb_avail":89095680,"statfs":{"total":96636764160,"available":91233976320,"internally_reserved":0,"allocated":3653300224,"data_stored":3653141847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5122,"internal_metadata":1665854462},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":2,"up_from":11,"seq":47244641007,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5278044,"kb_used_data":3745264,"kb_used_omap":4,"kb_used_meta":1532731,"kb_avail":89093796,"statfs":{"total":96636764160,"available":91232047104,"internally_reserved":0,"allocated":3835150336,"data_stored":3834995980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":1569517192},"hb_peers":[0,1,3,4,5],"s
nap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":17,"apply_latency_ms":17,"commit_latency_ns":17000000,"apply_latency_ns":17000000},"alerts":[]},{"osd":1,"up_from":11,"seq":47244641007,"num_pgs":13,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5125188,"kb_used_data":3736008,"kb_used_omap":4,"kb_used_meta":1388731,"kb_avail":89246652,"statfs":{"total":96636764160,"available":91388571648,"internally_reserved":0,"allocated":3825672192,"data_stored":3825517327,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4471,"internal_metadata":1422061193},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data
_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T20:14:13.591 INFO:tasks.ceph.ceph_manager.ceph:clean! 
2026-03-31T20:14:13.592 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T20:14:13.771 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:13.771 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T20:14:13.788 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":1896,"stamp":"2026-03-31T20:14:13.223172+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":66,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":43467420,"kb_used_data":36672192,"kb_used_omap":28,"kb_used_meta":6669219,"kb_avail":522763620,"statfs":{"total":579820584960,"available":
535309946880,"internally_reserved":0,"allocated":37552324608,"data_stored":37551381823,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":28786,"internal_metadata":6829281166},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":80,"apply_latency_ms":80,"commit_latency_ns":80000000,"apply_latency_ns":80000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":-58805018624,"num_objects":-49977,"num_object_clones":0,"num_object_copies":-149931,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":-49977,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":-2163708,"num_write_kb":-15633416,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001048"},"pg_stats":[{"pgid":"1.0","version":"16'32","reported_seq":106,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160209+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T20:1
0:14.160209+0000","last_peered":"2026-03-31T20:10:14.160209+0000","last_clean":"2026-03-31T20:10:14.160209+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T20:10:14.160209+0000","last_undegraded":"2026-03-31T20:10:14.160209+0000","last_fullsized":"2026-03-31T20:10:14.160209+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.3","version":"17'1","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220480+0000","last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T20:10:34.220480+0000","last_peered":"2026-03-31T20:10:34.220480+0000","last_clean":"2026-03-31T20:10:34.220480+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T20:10:34.220480+0000","last_undegraded":"2026-03-31T20:10:34.220480+0000","last_fullsized":"2026-03-31T20:10:34.220480+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:
51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":65,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:14.160205+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T20:10:14.160205+0000","last_peered":"2026-03-31T20:10:14.160205+0000","last_clean":"2026-03-31T20:10:14.160205+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19:1
1:52.462306+0000","last_unstale":"2026-03-31T20:10:14.160205+0000","last_undegraded":"2026-03-31T20:10:14.160205+0000","last_fullsized":"2026-03-31T20:10:14.160205+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_s
naps":[]},{"pgid":"2.1","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:19.569354+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T20:10:19.569354+0000","last_peered":"2026-03-31T20:10:19.569354+0000","last_clean":"2026-03-31T20:10:19.569354+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T20:10:19.569354+0000","last_undegraded":"2026-03-31T20:10:19.569354+0000","last_fullsized":"2026-03-31T20:10:19.569354+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.2","version":"19'2","reported_seq":66,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:34.220474+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T20:10:34.220474+0000","last_peered":"2026-03-31T20:10:34.220474+0000","last_clean":"2026-03-31T20:10:34.220474+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T20:10:34.220474+0000","last_undegraded":"2026-03-31T20:10:34.220474+0000","last_fullsized":"2026-03-31T20:10:34.220474+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-0
3-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:29.625548+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T20:10:29.625548+0000","last_peered":"2026-03-31T20:10:29.625548+0000","last_clean":"2026-03-31T20:10:29.625548+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T19:
11:52.471594+0000","last_unstale":"2026-03-31T20:10:29.625548+0000","last_undegraded":"2026-03-31T20:10:29.625548+0000","last_fullsized":"2026-03-31T20:10:29.625548+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary"
:1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":66,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-31T20:14:10.896452+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T20:14:10.896452+0000","last_peered":"2026-03-31T20:14:10.896452+0000","last_clean":"2026-03-31T20:14:10.896452+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:11:52.468346+0000","last_unstale":"2026-03-31T20:14:10.896452+0000","last_undegraded":"2026-03-31T20:14:10.896452+0000","last_fullsized":"2026-03-31T20:14:10.896452+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":63,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:24.579283+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T20:10:24.579283+0000","last_peered":"2026-03-31T20:10:24.579283+0000","last_clean":"2026-03-31T20:10:24.579283+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T20:10:24.579283+0000","last_undegraded":"2026-03-31T20:10:24.579283+0000","last_fullsized":"2026-03-31T20:10:24.579283+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:5
1.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":67,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.909414+0000","last_change":"2026-03-31T19:11:55.935961+0000","last_active":"2026-03-31T20:14:11.909414+0000","last_peered":"2026-03-31T20:14:11.909414+0000","last_clean":"2026-03-31T20:14:11.909414+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+
0000","last_unstale":"2026-03-31T20:14:11.909414+0000","last_undegraded":"2026-03-31T20:14:11.909414+0000","last_fullsized":"2026-03-31T20:14:11.909414+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:10:37.606853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000107432,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"
pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"nu
m_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542895,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":13199640,"kb_used_data":12594824,"kb_used_omap":4,"kb_used_meta":604795,"kb_avail":81172200,"statfs":{"total":96636764160,"available":83120332800,"internally_reserved":0,"allocated":12897099776,"data_stored":12896941343,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":619310728},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":12,"seq":51539608303,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":9185340,"kb_used_data":8572404,"kb_used_omap":5,"kb_used_meta":569146,"kb_avail":85186500,"statfs":{"total":96636764160,"available":87230976000,"internally_reserved":0,"allocated":8778141696,"data_stored":8777983319,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5126,"internal_metadata":582806522},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":4,"up_from":11,"seq":47244641007,
"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5403048,"kb_used_data":4456016,"kb_used_omap":5,"kb_used_meta":947002,"kb_avail":88968792,"statfs":{"total":96636764160,"available":91104043008,"internally_reserved":0,"allocated":4562960384,"data_stored":4562802007,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5123,"internal_metadata":969731069},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]},{"osd":3,"up_from":11,"seq":47244641005,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5276160,"kb_used_data":3567676,"kb_used_omap":5,"kb_used_meta":1626810,"kb_avail":89095680,"statfs":{"total":96636764160,"available":91233976320,"internally_reserved":0,"allocated":3653300224,"data_stored":3653141847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5122,"internal_metadata":1665854462},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":2,"up_from":11,"seq":47244641007,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5278044,"kb_used_data":3745264,"kb_used_omap":4,"kb_used_meta":1532731,"kb_avail":89093796,"statfs":{"total":96636764160,"available":91232047104,"internally_reserved":0,"allocated":3835150336,"data_stored":3834995980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":1569517192},"hb_peers":[0,1,3,4,5],"s
nap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":17,"apply_latency_ms":17,"commit_latency_ns":17000000,"apply_latency_ns":17000000},"alerts":[]},{"osd":1,"up_from":11,"seq":47244641007,"num_pgs":13,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5125188,"kb_used_data":3736008,"kb_used_omap":4,"kb_used_meta":1388731,"kb_avail":89246652,"statfs":{"total":96636764160,"available":91388571648,"internally_reserved":0,"allocated":3825672192,"data_stored":3825517327,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4471,"internal_metadata":1422061193},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data
_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T20:14:13.788 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-31T20:14:13.950 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:13.950 
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":29,"fsid":"0c165360-d647-4244-b40a-096067707e1d","created":"2026-03-31T19:11:42.338985+0000","modified":"2026-03-31T20:14:11.900050+0000","last_up_change":"2026-03-31T19:11:49.445731+0000","last_in_change":"2026-03-31T19:11:43.426008+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":3,"max_osd":6,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-31T19:11:46.614367+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"17","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"
none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":6.059999942779541,"score_stable":6.059999942779541,"optimal_score":0.33000001311302185,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-31T19:11:51.217694+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"19","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":19,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_prom
ote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.25,"score_stable":2.25,"optimal_score":1,"raw_score_acting":2.25,"raw_score_stable":2.25,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"8e42da84-9085-490f-bcb2-12a7601715cc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":12,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6804","nonce":1584285308}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6805","nonce":1584285308}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6807","nonce":1584285308}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6806","nonce":1584285308}]},"public_addr":"192.168.123.103:6804/1584285308","cluster_addr":"192.168.123.103:6805/1584285308","heartbeat_back_addr":"192.168.123.103:6807/1584285308","heartbeat_front_addr":"192.168.123.103:6806/1584285308","state":["exists","up"]},{"osd":1,"uuid":"c3e0761c-df20-4f27-aa02-67cba4a367a0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6800","nonce":2764498957}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6801","nonce":2764498957}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6803","nonce":2764498957}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.103:6802","nonce":2764498957}]},"public_addr":"192.168.123.103:6800/2764498957","cluster_addr":"192.168.123.103:6801/2764498957","heart
beat_back_addr":"192.168.123.103:6803/2764498957","heartbeat_front_addr":"192.168.123.103:6802/2764498957","state":["exists","up"]},{"osd":2,"uuid":"33de56d5-999a-4940-9d6b-bd0e44b33124","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6800","nonce":473978242}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6801","nonce":473978242}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6803","nonce":473978242}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6802","nonce":473978242}]},"public_addr":"192.168.123.105:6800/473978242","cluster_addr":"192.168.123.105:6801/473978242","heartbeat_back_addr":"192.168.123.105:6803/473978242","heartbeat_front_addr":"192.168.123.105:6802/473978242","state":["exists","up"]},{"osd":3,"uuid":"9fdc304f-7cea-4331-b3c0-c11b69079ac4","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6804","nonce":457425259}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6805","nonce":457425259}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6807","nonce":457425259}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.105:6806","nonce":457425259}]},"public_addr":"192.168.123.105:6804/457425259","cluster_addr":"192.168.123.105:6805/457425259","heartbeat_back_addr":"192.168.123.105:6807/457425259","heartbeat_front_addr":"192.168.123.105:6806/457425259","state":["exists","up"]},{"osd":4,"uuid":"ead5f212-78f7-4fbe-971c-f5e7aafbfd46","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":11,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6800"
,"nonce":682239721}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6801","nonce":682239721}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6803","nonce":682239721}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6802","nonce":682239721}]},"public_addr":"192.168.123.106:6800/682239721","cluster_addr":"192.168.123.106:6801/682239721","heartbeat_back_addr":"192.168.123.106:6803/682239721","heartbeat_front_addr":"192.168.123.106:6802/682239721","state":["exists","up"]},{"osd":5,"uuid":"453e4d44-fda0-456d-bc9d-f74967039030","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":14,"up_thru":25,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6804","nonce":146747963}]},"cluster_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6805","nonce":146747963}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6807","nonce":146747963}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v1","addr":"192.168.123.106:6806","nonce":146747963}]},"public_addr":"192.168.123.106:6804/146747963","cluster_addr":"192.168.123.106:6805/146747963","heartbeat_back_addr":"192.168.123.106:6807/146747963","heartbeat_front_addr":"192.168.123.106:6806/146747963","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.566317+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.300559+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.219411+0000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probabilit
y":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.277587+0000","dead_epoch":0},{"osd":4,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:45.236606+0000","dead_epoch":0},{"osd":5,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4541880224203014143,"old_weight":0,"last_purged_snaps_scrub":"2026-03-31T19:11:48.261855+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-31T20:14:14.965 INFO:tasks.ceph:Scrubbing osd.0 2026-03-31T20:14:14.965 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:15.047 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:15.047 INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:15.047 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:15.060 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 0 2026-03-31T20:14:15.228 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 0 to deep-scrub 2026-03-31T20:14:15.244 INFO:tasks.ceph:Scrubbing osd.1 2026-03-31T20:14:15.244 
DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:15.323 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:15.323 INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:15.323 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:15.334 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 1 2026-03-31T20:14:15.505 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 1 to deep-scrub 2026-03-31T20:14:15.521 INFO:tasks.ceph:Scrubbing osd.2 2026-03-31T20:14:15.521 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:15.605 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:15.605 INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:15.605 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:15.617 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 2 2026-03-31T20:14:15.786 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 2 to deep-scrub 2026-03-31T20:14:15.802 INFO:tasks.ceph:Scrubbing osd.3 2026-03-31T20:14:15.802 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:15.880 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:15.880 
INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:15.880 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:15.891 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 3 2026-03-31T20:14:16.057 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 3 to deep-scrub 2026-03-31T20:14:16.070 INFO:tasks.ceph:Scrubbing osd.4 2026-03-31T20:14:16.070 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.4 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:16.147 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:16.147 INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:16.147 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:16.156 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 4 2026-03-31T20:14:16.312 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 4 to deep-scrub 2026-03-31T20:14:16.325 INFO:tasks.ceph:Scrubbing osd.5 2026-03-31T20:14:16.325 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.5 config set osd_debug_deep_scrub_sleep 0 2026-03-31T20:14:16.397 INFO:teuthology.orchestra.run.vm01.stdout:{ 2026-03-31T20:14:16.397 INFO:teuthology.orchestra.run.vm01.stdout: "success": "osd_debug_deep_scrub_sleep = '' (not observed, change may require restart) " 2026-03-31T20:14:16.398 INFO:teuthology.orchestra.run.vm01.stdout:} 2026-03-31T20:14:16.407 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage 
/home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd deep-scrub 5 2026-03-31T20:14:16.558 INFO:teuthology.orchestra.run.vm01.stderr:instructed osd(s) 5 to deep-scrub 2026-03-31T20:14:16.570 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T20:14:16.720 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:16.720 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T20:14:16.732 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":1897,"stamp":"2026-03-31T20:14:15.223474+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":66,"num_osds":6,"num_per_pool_osds":6,"n
um_per_pool_omap_osds":6,"kb":566231040,"kb_used":43351720,"kb_used_data":36556492,"kb_used_omap":29,"kb_used_meta":6669218,"kb_avail":522879320,"statfs":{"total":579820584960,"available":535428423680,"internally_reserved":0,"allocated":37433847808,"data_stored":37432893130,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":30090,"internal_metadata":6829279862},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":91,"apply_latency_ms":91,"commit_latency_ns":91000000,"apply_latency_ns":91000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":-58805018624,"num_objects":-49977,"num_object_clones":0,"num_object_copies":-149931,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":-49977,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":-2163708,"num_write_kb":-15633416,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"8.001120"},"pg_stats":[{"pgid":"1.0","version":"16'32"
,"reported_seq":111,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.979276+0000","last_change":"2026-03-31T19:11:51.461485+0000","last_active":"2026-03-31T20:14:11.979276+0000","last_peered":"2026-03-31T20:14:11.979276+0000","last_clean":"2026-03-31T20:14:11.979276+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T20:14:11.979276+0000","last_undegraded":"2026-03-31T20:14:11.979276+0000","last_fullsized":"2026-03-31T20:14:11.979276+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:47.437529+0000","last_clean_scrub_stamp":"2026-03-31T19:11:47.437529+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T01:50:30.040955+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.3","version":"17'1","reported_seq":69,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.979268+0000","last_change":"2026-03-31T19:11:54.469647+0000","last_active":"2026-03-31T20:14:11.979268+0000","last_peered":"2026-03-31T20:14:11.979268+0000","last_clean":"2026-03-31T20:14:11.979268+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T20:14:11.979268+0000","last_undegraded":"2026-03-31T20:14:11.979268+0000","last_fullsized":"2026-03-31T20:14:11.979268+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:
51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T04:27:38.658998+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":70,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.979279+0000","last_change":"2026-03-31T19:11:54.469597+0000","last_active":"2026-03-31T20:14:11.979279+0000","last_peered":"2026-03-31T20:14:11.979279+0000","last_clean":"2026-03-31T20:14:11.979279+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19:1
1:52.462306+0000","last_unstale":"2026-03-31T20:14:11.979279+0000","last_undegraded":"2026-03-31T20:14:11.979279+0000","last_fullsized":"2026-03-31T20:14:11.979279+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T02:18:16.715821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_s
naps":[]},{"pgid":"2.1","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:19.569354+0000","last_change":"2026-03-31T19:11:54.478837+0000","last_active":"2026-03-31T20:10:19.569354+0000","last_peered":"2026-03-31T20:10:19.569354+0000","last_clean":"2026-03-31T20:10:19.569354+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T20:10:19.569354+0000","last_undegraded":"2026-03-31T20:10:19.569354+0000","last_fullsized":"2026-03-31T20:10:19.569354+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T03:14:45.480524+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.2","version":"19'2","reported_seq":70,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.979268+0000","last_change":"2026-03-31T19:11:54.469656+0000","last_active":"2026-03-31T20:14:11.979268+0000","last_peered":"2026-03-31T20:14:11.979268+0000","last_clean":"2026-03-31T20:14:11.979268+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T20:14:11.979268+0000","last_undegraded":"2026-03-31T20:14:11.979268+0000","last_fullsized":"2026-03-31T20:14:11.979268+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-0
3-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T20:18:53.640113+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":64,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:29.625548+0000","last_change":"2026-03-31T19:11:54.478813+0000","last_active":"2026-03-31T20:10:29.625548+0000","last_peered":"2026-03-31T20:10:29.625548+0000","last_clean":"2026-03-31T20:10:29.625548+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T19:
11:52.471594+0000","last_unstale":"2026-03-31T20:10:29.625548+0000","last_undegraded":"2026-03-31T20:10:29.625548+0000","last_fullsized":"2026-03-31T20:10:29.625548+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T01:51:42.337886+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary"
:1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":66,"reported_epoch":28,"state":"active+clean","last_fresh":"2026-03-31T20:14:10.896452+0000","last_change":"2026-03-31T19:11:55.268223+0000","last_active":"2026-03-31T20:14:10.896452+0000","last_peered":"2026-03-31T20:14:10.896452+0000","last_clean":"2026-03-31T20:14:10.896452+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:11:52.468346+0000","last_unstale":"2026-03-31T20:14:10.896452+0000","last_undegraded":"2026-03-31T20:14:10.896452+0000","last_fullsized":"2026-03-31T20:14:10.896452+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:57:18.273386+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":63,"reported_epoch":27,"state":"active+clean","last_fresh":"2026-03-31T20:10:24.579283+0000","last_change":"2026-03-31T19:11:54.478711+0000","last_active":"2026-03-31T20:10:24.579283+0000","last_peered":"2026-03-31T20:10:24.579283+0000","last_clean":"2026-03-31T20:10:24.579283+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T20:10:24.579283+0000","last_undegraded":"2026-03-31T20:10:24.579283+0000","last_fullsized":"2026-03-31T20:10:24.579283+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:5
1.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T00:50:09.152646+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":67,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:11.909414+0000","last_change":"2026-03-31T19:11:55.935961+0000","last_active":"2026-03-31T20:14:11.909414+0000","last_peered":"2026-03-31T20:14:11.909414+0000","last_clean":"2026-03-31T20:14:11.909414+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+
0000","last_unstale":"2026-03-31T20:14:11.909414+0000","last_undegraded":"2026-03-31T20:14:11.909414+0000","last_fullsized":"2026-03-31T20:14:11.909414+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T19:11:51.454243+0000","last_clean_scrub_stamp":"2026-03-31T19:11:51.454243+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:10:37.606853+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000107432,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"
pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"nu
m_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542896,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":13083940,"kb_used_data":12479124,"kb_used_omap":5,"kb_used_meta":604794,"kb_avail":81287900,"statfs":{"total":96636764160,"available":83238809600,"internally_reserved":0,"allocated":12778622976,"data_stored":12778452650,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5776,"internal_metadata":619309424},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]},{"osd":0,"up_from":12,"seq":51539608303,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":9185340,"kb_used_data":8572404,"kb_used_omap":5,"kb_used_meta":569146,"kb_avail":85186500,"statfs":{"total":96636764160,"available":87230976000,"internally_reserved":0,"allocated":8778141696,"data_stored":8777983319,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5126,"internal_metadata":582806522},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":4,"up_from":11,"s
eq":47244641007,"num_pgs":10,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5403048,"kb_used_data":4456016,"kb_used_omap":5,"kb_used_meta":947002,"kb_avail":88968792,"statfs":{"total":96636764160,"available":91104043008,"internally_reserved":0,"allocated":4562960384,"data_stored":4562802007,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5123,"internal_metadata":969731069},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]},{"osd":3,"up_from":11,"seq":47244641005,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5276160,"kb_used_data":3567676,"kb_used_omap":5,"kb_used_meta":1626810,"kb_avail":89095680,"statfs":{"total":96636764160,"available":91233976320,"internally_reserved":0,"allocated":3653300224,"data_stored":3653141847,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5122,"internal_metadata":1665854462},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":26,"apply_latency_ms":26,"commit_latency_ns":26000000,"apply_latency_ns":26000000},"alerts":[]},{"osd":2,"up_from":11,"seq":47244641007,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5278044,"kb_used_data":3745264,"kb_used_omap":4,"kb_used_meta":1532731,"kb_avail":89093796,"statfs":{"total":96636764160,"available":91232047104,"internally_reserved":0,"allocated":3835150336,"data_stored":3834995980,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4472,"internal_metadata":1569517192},"hb_peers
":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":17,"apply_latency_ms":17,"commit_latency_ns":17000000,"apply_latency_ns":17000000},"alerts":[]},{"osd":1,"up_from":11,"seq":47244641007,"num_pgs":13,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5125188,"kb_used_data":3736008,"kb_used_omap":4,"kb_used_meta":1388731,"kb_avail":89246652,"statfs":{"total":96636764160,"available":91388571648,"internally_reserved":0,"allocated":3825672192,"data_stored":3825517327,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4471,"internal_metadata":1422061193},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"alloc
ated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 1.0 last_scrub_stamp 2026-03-31T19:11:47.437529+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=47, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.3 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.0 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, 
tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.1 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.2 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.4 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.5 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.6 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:pgid 2.7 last_scrub_stamp 2026-03-31T19:11:51.454243+0000 time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=19, tm_min=11, tm_sec=51, 
tm_wday=1, tm_yday=90, tm_isdst=-1) <= time.struct_time(tm_year=2026, tm_mon=3, tm_mday=31, tm_hour=20, tm_min=14, tm_sec=13, tm_wday=1, tm_yday=90, tm_isdst=0) 2026-03-31T20:14:16.733 INFO:tasks.ceph:Still waiting for all pgs to be scrubbed. 2026-03-31T20:14:36.734 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-31T20:14:36.889 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-31T20:14:36.889 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-31T20:14:36.902 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":1907,"stamp":"2026-03-31T20:14:35.226092+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":39,"ondisk_log_size":39,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"se
q":0,"num_pgs":42,"num_osds":6,"num_per_pool_osds":6,"num_per_pool_omap_osds":6,"kb":566231040,"kb_used":26635552,"kb_used_data":19875968,"kb_used_omap":33,"kb_used_meta":6759454,"kb_avail":539595488,"statfs":{"total":579820584960,"available":552545779712,"internally_reserved":0,"allocated":20352991232,"data_stored":20351999856,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":34654,"internal_metadata":6921681058},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":46,"apply_latency_ms":46,"commit_latency_ns":46000000,"apply_latency_ns":46000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001521"},"pg_stats":[{"pgid":"1.0","ve
rsion":"29'34","reported_seq":121,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:19.535716+0000","last_change":"2026-03-31T20:14:19.535716+0000","last_active":"2026-03-31T20:14:19.535716+0000","last_peered":"2026-03-31T20:14:19.535716+0000","last_clean":"2026-03-31T20:14:19.535716+0000","last_became_active":"2026-03-31T19:11:51.461431+0000","last_became_peered":"2026-03-31T19:11:51.461431+0000","last_unstale":"2026-03-31T20:14:19.535716+0000","last_undegraded":"2026-03-31T20:14:19.535716+0000","last_fullsized":"2026-03-31T20:14:19.535716+0000","mapping_epoch":15,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":16,"parent":"0.0","parent_split_bits":0,"last_scrub":"29'34","last_scrub_stamp":"2026-03-31T20:14:19.535686+0000","last_deep_scrub":"29'34","last_deep_scrub_stamp":"2026-03-31T20:14:19.535686+0000","last_clean_scrub_stamp":"2026-03-31T20:14:19.535686+0000","objects_scrubbed":2,"log_size":34,"log_dups_size":0,"ondisk_log_size":34,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":1,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T01:46:33.299711+0000","scrub_duration":5,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,2],"acting":[5,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.3","version":"29'2","reported_seq":78,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:18.493269+0000","last_change":"2026-03-31T20:14:18.493269+0000","last_active":"2026-03-31T20:14:18.493269+0000","last_peered":"2026-03-31T20:14:18.493269+0000","last_clean":"2026-03-31T20:14:18.493269+0000","last_became_active":"2026-03-31T19:11:52.462938+0000","last_became_peered":"2026-03-31T19:11:52.462938+0000","last_unstale":"2026-03-31T20:14:18.493269+0000","last_undegraded":"2026-03-31T20:14:18.493269+0000","last_fullsized":"2026-03-31T20:14:18.493269+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"29'2","last_scrub_stamp":"2026-03-31T20:14:18.493246+0000","last_deep_scrub":"29'2","last_deep_scrub_stamp":"2026-03-31T20:1
4:18.493246+0000","last_clean_scrub_stamp":"2026-03-31T20:14:18.493246+0000","objects_scrubbed":1,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T07:04:46.654778+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00011541599999999999,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.0","version":"0'0","reported_seq":78,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:20.559009+0000","last_change":"2026-03-31T20:14:20.559009+0000","last_active":"2026-03-31T20:14:20.559009+0000","last_peered":"2026-03-31T20:14:20.559009+0000","last_clean":"2026-03-31T20:14:20.559009+0000","last_became_active":"2026-03-31T19:11:52.462306+0000","last_became_peered":"2026-03-31T19
:11:52.462306+0000","last_unstale":"2026-03-31T20:14:20.559009+0000","last_undegraded":"2026-03-31T20:14:20.559009+0000","last_fullsized":"2026-03-31T20:14:20.559009+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:20.558975+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:20.558975+0000","last_clean_scrub_stamp":"2026-03-31T20:14:20.558975+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":1,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T03:27:17.668076+0000","scrub_duration":5,"objects_trimmed":0,"snaptrim_duration":0.000113282,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,3],"acting":[5,3],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged
_snaps":[]},{"pgid":"2.1","version":"0'0","reported_seq":77,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:15.535561+0000","last_change":"2026-03-31T20:14:15.535186+0000","last_active":"2026-03-31T20:14:15.535561+0000","last_peered":"2026-03-31T20:14:15.535561+0000","last_clean":"2026-03-31T20:14:15.535561+0000","last_became_active":"2026-03-31T19:11:52.471581+0000","last_became_peered":"2026-03-31T19:11:52.471581+0000","last_unstale":"2026-03-31T20:14:15.535561+0000","last_undegraded":"2026-03-31T20:14:15.535561+0000","last_fullsized":"2026-03-31T20:14:15.535561+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:15.535144+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:15.535144+0000","last_clean_scrub_stamp":"2026-03-31T20:14:15.535144+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T04:55:34.519130+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00020954200000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.2","version":"29'3","reported_seq":79,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:17.486411+0000","last_change":"2026-03-31T20:14:17.486411+0000","last_active":"2026-03-31T20:14:17.486411+0000","last_peered":"2026-03-31T20:14:17.486411+0000","last_clean":"2026-03-31T20:14:17.486411+0000","last_became_active":"2026-03-31T19:11:52.462970+0000","last_became_peered":"2026-03-31T19:11:52.462970+0000","last_unstale":"2026-03-31T20:14:17.486411+0000","last_undegraded":"2026-03-31T20:14:17.486411+0000","last_fullsized":"2026-03-31T20:14:17.486411+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"29'3","last_scrub_stamp":"2026-03-31T20:14:17.486349+0000","last_deep_scrub":"29'3","last_deep_scrub_stamp":"2026
-03-31T20:14:17.486349+0000","last_clean_scrub_stamp":"2026-03-31T20:14:17.486349+0000","objects_scrubbed":1,"log_size":3,"log_dups_size":0,"ondisk_log_size":3,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T21:52:50.036821+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000174816,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[5,1],"acting":[5,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":5,"acting_primary":5,"purged_snaps":[]},{"pgid":"2.4","version":"0'0","reported_seq":76,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:16.489083+0000","last_change":"2026-03-31T20:14:16.489083+0000","last_active":"2026-03-31T20:14:16.489083+0000","last_peered":"2026-03-31T20:14:16.489083+0000","last_clean":"2026-03-31T20:14:16.489083+0000","last_became_active":"2026-03-31T19:11:52.471594+0000","last_became_peered":"2026-03-31T1
9:11:52.471594+0000","last_unstale":"2026-03-31T20:14:16.489083+0000","last_undegraded":"2026-03-31T20:14:16.489083+0000","last_fullsized":"2026-03-31T20:14:16.489083+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:16.488957+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:16.488957+0000","last_clean_scrub_stamp":"2026-03-31T20:14:16.488957+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T00:51:59.320211+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00019145800000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primar
y":1,"purged_snaps":[]},{"pgid":"2.5","version":"0'0","reported_seq":76,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:16.748190+0000","last_change":"2026-03-31T20:14:16.748190+0000","last_active":"2026-03-31T20:14:16.748190+0000","last_peered":"2026-03-31T20:14:16.748190+0000","last_clean":"2026-03-31T20:14:16.748190+0000","last_became_active":"2026-03-31T19:11:52.468346+0000","last_became_peered":"2026-03-31T19:11:52.468346+0000","last_unstale":"2026-03-31T20:14:16.748190+0000","last_undegraded":"2026-03-31T20:14:16.748190+0000","last_fullsized":"2026-03-31T20:14:16.748190+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:16.748155+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:16.748155+0000","last_clean_scrub_stamp":"2026-03-31T20:14:16.748155+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-04-02T06:16:25.626397+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000138889,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]},{"pgid":"2.6","version":"0'0","reported_seq":75,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:17.529296+0000","last_change":"2026-03-31T20:14:17.529296+0000","last_active":"2026-03-31T20:14:17.529296+0000","last_peered":"2026-03-31T20:14:17.529296+0000","last_clean":"2026-03-31T20:14:17.529296+0000","last_became_active":"2026-03-31T19:11:52.470925+0000","last_became_peered":"2026-03-31T19:11:52.470925+0000","last_unstale":"2026-03-31T20:14:17.529296+0000","last_undegraded":"2026-03-31T20:14:17.529296+0000","last_fullsized":"2026-03-31T20:14:17.529296+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:17.529255+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:1
7.529255+0000","last_clean_scrub_stamp":"2026-03-31T20:14:17.529255+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-01T23:02:42.255582+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000115977,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]},{"pgid":"2.7","version":"0'0","reported_seq":75,"reported_epoch":29,"state":"active+clean","last_fresh":"2026-03-31T20:14:16.626389+0000","last_change":"2026-03-31T20:14:16.626389+0000","last_active":"2026-03-31T20:14:16.626389+0000","last_peered":"2026-03-31T20:14:16.626389+0000","last_clean":"2026-03-31T20:14:16.626389+0000","last_became_active":"2026-03-31T19:11:52.464385+0000","last_became_peered":"2026-03-31T19:11:52.464385+
0000","last_unstale":"2026-03-31T20:14:16.626389+0000","last_undegraded":"2026-03-31T20:14:16.626389+0000","last_fullsized":"2026-03-31T20:14:16.626389+0000","mapping_epoch":16,"log_start":"0'0","ondisk_log_start":"0'0","created":16,"last_epoch_clean":17,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-31T20:14:16.626344+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-31T20:14:16.626344+0000","last_clean_scrub_stamp":"2026-03-31T20:14:16.626344+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-04-02T05:11:34.996478+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000107432,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[4,2],"acting":[4,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":4,"acting_primary":4,"purged_snaps":[]}],"
pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":5,"ondisk_log_size":5,"up":16,"acting":16,"num_store_stats":6},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"nu
m_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":864256,"data_stored":857120,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":34,"ondisk_log_size":34,"up":2,"acting":2,"num_store_stats":3}],"osd_stats":[{"osd":5,"up_from":14,"seq":60129542900,"num_pgs":11,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":12600848,"kb_used_data":11996032,"kb_used_omap":5,"kb_used_meta":604794,"kb_avail":81770992,"statfs":{"total":96636764160,"available":83733495808,"internally_reserved":0,"allocated":12283936768,"data_stored":12283766442,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5776,"internal_metadata":619309424},"hb_peers":[0,1,2,3,4],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":19,"apply_latency_ms":19,"commit_latency_ns":19000000,"apply_latency_ns":19000000},"alerts":[]},{"osd":0,"up_from":12,"seq":51539608307,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":587472,"kb_used_data":256,"kb_used_omap":5,"kb_used_meta":587194,"kb_avail":93784368,"statfs":{"total":96636764160,"available":96035192832,"internally_reserved":0,"allocated":262144,"data_stored":99463,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5778,"internal_metadata":601287022},"hb_peers":[1,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":4,"up_from":11,"seq":47244641011,"num_pgs":10,"n
um_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":5234440,"kb_used_data":4287416,"kb_used_omap":5,"kb_used_meta":947002,"kb_avail":89137400,"statfs":{"total":96636764160,"available":91276697600,"internally_reserved":0,"allocated":4390313984,"data_stored":4390151303,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5775,"internal_metadata":969730417},"hb_peers":[0,1,2,3,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":16,"apply_latency_ms":16,"commit_latency_ns":16000000,"apply_latency_ns":16000000},"alerts":[]},{"osd":3,"up_from":11,"seq":47244641009,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":1681232,"kb_used_data":256,"kb_used_omap":5,"kb_used_meta":1680954,"kb_avail":92690608,"statfs":{"total":96636764160,"available":94915182592,"internally_reserved":0,"allocated":262144,"data_stored":99463,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5774,"internal_metadata":1721297266},"hb_peers":[0,1,2,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":11,"seq":47244641011,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":1551448,"kb_used_data":648,"kb_used_omap":5,"kb_used_meta":1550778,"kb_avail":92820392,"statfs":{"total":96636764160,"available":95048081408,"internally_reserved":0,"allocated":663552,"data_stored":497303,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5776,"internal_metadata":1587997040},"hb_peers":[0,1,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_re
paired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":11,"seq":47244641011,"num_pgs":13,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":4980112,"kb_used_data":3591360,"kb_used_omap":5,"kb_used_meta":1388730,"kb_avail":89391728,"statfs":{"total":96636764160,"available":91537129472,"internally_reserved":0,"allocated":3677552640,"data_stored":3677385882,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":5775,"internal_metadata":1422059889},"hb_peers":[0,2,3,4,5],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":11,"apply_latency_ms":11,"commit_latency_ns":11000000,"apply_latency_ns":11000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":401408,"data_stored":397840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated
":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":4,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":5,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-31T20:14:36.903 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph config set global mon_health_to_clog false 2026-03-31T20:14:37.069 INFO:teuthology.misc:Shutting down mds daemons... 2026-03-31T20:14:37.069 INFO:teuthology.misc:Shutting down osd daemons... 
2026-03-31T20:14:37.069 DEBUG:tasks.ceph.osd.0:waiting for process to exit 2026-03-31T20:14:37.069 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:37.146 INFO:tasks.ceph.osd.0:Stopped 2026-03-31T20:14:37.146 DEBUG:tasks.ceph.osd.1:waiting for process to exit 2026-03-31T20:14:37.147 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:42.934 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.1 is failed for ~0s 2026-03-31T20:14:43.248 INFO:tasks.ceph.osd.1:Stopped 2026-03-31T20:14:43.248 DEBUG:tasks.ceph.osd.2:waiting for process to exit 2026-03-31T20:14:43.248 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:48.437 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.2 is failed for ~0s 2026-03-31T20:14:48.437 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.1 has been restored 2026-03-31T20:14:49.349 INFO:tasks.ceph.osd.2:Stopped 2026-03-31T20:14:49.349 DEBUG:tasks.ceph.osd.3:waiting for process to exit 2026-03-31T20:14:49.349 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:49.449 INFO:tasks.ceph.osd.3:Stopped 2026-03-31T20:14:49.449 DEBUG:tasks.ceph.osd.4:waiting for process to exit 2026-03-31T20:14:49.449 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:49.544 INFO:tasks.ceph.osd.4:Stopped 2026-03-31T20:14:49.544 DEBUG:tasks.ceph.osd.5:waiting for process to exit 2026-03-31T20:14:49.544 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:49.620 INFO:tasks.ceph.osd.5:Stopped 2026-03-31T20:14:49.620 INFO:teuthology.misc:Shutting down mgr daemons... 2026-03-31T20:14:49.620 DEBUG:tasks.ceph.mgr.x:waiting for process to exit 2026-03-31T20:14:49.620 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:49.647 INFO:tasks.ceph.mgr.x:Stopped 2026-03-31T20:14:49.647 INFO:teuthology.misc:Shutting down mon daemons... 
2026-03-31T20:14:49.647 DEBUG:tasks.ceph.mon.a:waiting for process to exit 2026-03-31T20:14:49.647 INFO:teuthology.orchestra.run:waiting for 300 2026-03-31T20:14:49.700 INFO:tasks.ceph.mon.a:Stopped 2026-03-31T20:14:49.700 INFO:tasks.ceph:Checking cluster log for badness... 2026-03-31T20:14:49.700 DEBUG:teuthology.orchestra.run.vm01:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/ceph.log | egrep -v '\(POOL_APP_NOT_ENABLED\)' | egrep -v '\(OSDMAP_FLAGS\)' | egrep -v '\(OSD_' | egrep -v '\(OBJECT_' | egrep -v '\(PG_' | egrep -v '\(SLOW_OPS\)' | egrep -v 'overall HEALTH' | egrep -v 'slow request' | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v '\(OSD_SLOW_PING_TIME' | egrep -v '\(MON_DOWN\)' | head -n 1 2026-03-31T20:14:49.749 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-0 on ubuntu@vm03.local 2026-03-31T20:14:49.749 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-0 2026-03-31T20:14:49.901 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-1 on ubuntu@vm03.local 2026-03-31T20:14:49.901 DEBUG:teuthology.orchestra.run.vm03:> sync && sudo umount -f /var/lib/ceph/osd/ceph-1 2026-03-31T20:14:50.371 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-2 on ubuntu@vm05.local 2026-03-31T20:14:50.372 DEBUG:teuthology.orchestra.run.vm05:> sync && sudo umount -f /var/lib/ceph/osd/ceph-2 2026-03-31T20:14:50.779 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-3 on ubuntu@vm05.local 2026-03-31T20:14:50.779 DEBUG:teuthology.orchestra.run.vm05:> sync && sudo umount -f /var/lib/ceph/osd/ceph-3 2026-03-31T20:14:51.249 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-4 on ubuntu@vm06.local 2026-03-31T20:14:51.249 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-4 2026-03-31T20:14:51.741 INFO:tasks.ceph:Unmounting /var/lib/ceph/osd/ceph-5 on ubuntu@vm06.local 2026-03-31T20:14:51.741 DEBUG:teuthology.orchestra.run.vm06:> sync && sudo umount -f /var/lib/ceph/osd/ceph-5 
2026-03-31T20:14:51.896 INFO:tasks.ceph:Archiving mon data... 2026-03-31T20:14:51.896 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/mon/ceph-a to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/data/mon.a.tgz 2026-03-31T20:14:51.897 DEBUG:teuthology.orchestra.run.vm01:> mktemp 2026-03-31T20:14:51.900 INFO:teuthology.orchestra.run.vm01.stdout:/tmp/tmp.wBnbq4qkij 2026-03-31T20:14:51.900 DEBUG:teuthology.orchestra.run.vm01:> sudo tar cz -f - -C /var/lib/ceph/mon/ceph-a -- . > /tmp/tmp.wBnbq4qkij 2026-03-31T20:14:51.980 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0666 /tmp/tmp.wBnbq4qkij 2026-03-31T20:14:52.073 DEBUG:teuthology.orchestra.remote:vm01:/tmp/tmp.wBnbq4qkij is 296KB 2026-03-31T20:14:52.122 DEBUG:teuthology.orchestra.run.vm01:> rm -fr /tmp/tmp.wBnbq4qkij 2026-03-31T20:14:52.124 INFO:tasks.ceph:Cleaning ceph cluster... 2026-03-31T20:14:52.124 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid 2026-03-31T20:14:52.165 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid 2026-03-31T20:14:52.167 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid 2026-03-31T20:14:52.168 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -rf -- /etc/ceph/ceph.conf /etc/ceph/ceph.keyring /home/ubuntu/cephtest/ceph.data /home/ubuntu/cephtest/ceph.monmap /home/ubuntu/cephtest/../*.pid 2026-03-31T20:14:52.216 INFO:teuthology.util.scanner:summary_data or yaml_file is empty! 2026-03-31T20:14:52.221 INFO:teuthology.util.scanner:summary_data or yaml_file is empty! 
2026-03-31T20:14:52.225 INFO:teuthology.util.scanner:summary_data or yaml_file is empty! 2026-03-31T20:14:52.229 INFO:teuthology.util.scanner:summary_data or yaml_file is empty! 2026-03-31T20:14:52.229 INFO:tasks.ceph:Archiving crash dumps... 2026-03-31T20:14:52.229 DEBUG:teuthology.misc:Transferring archived files from vm01:/var/lib/ceph/crash to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm01/crash 2026-03-31T20:14:52.229 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /var/lib/ceph/crash -- . 2026-03-31T20:14:52.263 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/crash to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm03/crash 2026-03-31T20:14:52.264 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/crash -- . 2026-03-31T20:14:52.271 DEBUG:teuthology.misc:Transferring archived files from vm05:/var/lib/ceph/crash to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm05/crash 2026-03-31T20:14:52.271 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /var/lib/ceph/crash -- . 2026-03-31T20:14:52.279 DEBUG:teuthology.misc:Transferring archived files from vm06:/var/lib/ceph/crash to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm06/crash 2026-03-31T20:14:52.279 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /var/lib/ceph/crash -- . 2026-03-31T20:14:52.287 INFO:tasks.ceph:Compressing logs... 
2026-03-31T20:14:52.287 DEBUG:teuthology.orchestra.run.vm01:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:14:52.306 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:14:52.312 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23873.log 2026-03-31T20:14:52.312 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24853.log 2026-03-31T20:14:52.312 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23873.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19819.log 2026-03-31T20:14:52.312 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23873.log.gz 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24853.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24853.log.gz 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23757.log 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26412.log 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19819.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19819.log.gz 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23757.log: gzip -5 --verbose -- /var/log/ceph/ceph.audit.log 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23757.log.gz 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26412.log: gzip -5 
--verbose -- /var/log/ceph/ceph-client.admin.24542.log 2026-03-31T20:14:52.313 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26412.log.gz 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24055.log 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph.audit.log: /var/log/ceph/ceph-client.admin.24542.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24542.log.gz 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24156.log 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26946.log 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24055.log: 0.0%/var/log/ceph/ceph-client.admin.24156.log: -- replaced with /var/log/ceph/ceph-client.admin.24055.log.gzgzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24929.log 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-31T20:14:52.314 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24156.log.gz 2026-03-31T20:14:52.315 DEBUG:teuthology.orchestra.run.vm05:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:14:52.315 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26946.log: 89.6% -- replaced with /var/log/ceph/ceph.audit.log.gz 2026-03-31T20:14:52.315 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26946.log.gz 2026-03-31T20:14:52.315 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26156.log 2026-03-31T20:14:52.315 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24929.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24929.log.gz 2026-03-31T20:14:52.315 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26616.log 2026-03-31T20:14:52.315 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27972.log 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26156.log: /var/log/ceph/ceph-client.admin.26616.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19826.log 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26616.log.gz 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26156.log.gz 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27972.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24980.log 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27972.log.gz 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25851.log 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19826.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19826.log.gz 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24980.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24980.log.gz 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27280.log 2026-03-31T20:14:52.316 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27204.log 2026-03-31T20:14:52.317 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25851.log: /var/log/ceph/ceph-client.admin.27280.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24025.log 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27280.log.gz 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26692.log 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27204.log: 0.0% 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27204.log.gz 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /var/log/ceph/ceph-client.admin.25851.log.gz 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27177.log 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24025.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24025.log.gz 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26692.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26692.log.gz 2026-03-31T20:14:52.317 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28277.log 2026-03-31T20:14:52.318 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27177.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27177.log.gz 2026-03-31T20:14:52.318 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28048.log 2026-03-31T20:14:52.318 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25157.log 2026-03-31T20:14:52.318 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28277.log: /var/log/ceph/ceph-client.admin.28048.log: gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.24080.log 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28048.log.gz 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28277.log.gz 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25157.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28073.log 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25157.log.gz 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27945.log 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24080.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24080.log.gz 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28073.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28073.log.gz 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24798.log 2026-03-31T20:14:52.319 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24441.log 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27945.log: /var/log/ceph/ceph-client.admin.24798.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28454.log 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24798.log.gz 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27945.log.gz 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24441.log: gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.27844.log 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24441.log.gz 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25132.log 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28454.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28454.log.gz 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27844.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27844.log.gz 2026-03-31T20:14:52.320 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24622.log 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24212.log 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25132.log: /var/log/ceph/ceph-client.admin.24622.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19672.log 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24622.log.gz 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25132.log.gz 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24212.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27762.log 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24212.log.gz 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28821.log 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19672.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.19672.log.gz 2026-03-31T20:14:52.321 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27762.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27762.log.gz 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28478.log 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27381.log 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28821.log: /var/log/ceph/ceph-client.admin.28478.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28478.log.gz 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28821.log.gz 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26846.log 2026-03-31T20:14:52.322 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26667.log 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27381.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27381.log.gz 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26846.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26846.log.gz 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.0.log 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25538.log 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26309.log 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26667.log: /var/log/ceph/ceph-client.admin.25538.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.25538.log.gz 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24389.log 2026-03-31T20:14:52.323 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26667.log.gz 2026-03-31T20:14:52.324 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24517.log 2026-03-31T20:14:52.324 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26309.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26309.log.gz 2026-03-31T20:14:52.324 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24389.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24389.log.gz 2026-03-31T20:14:52.324 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26054.log 2026-03-31T20:14:52.324 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23949.log 2026-03-31T20:14:52.325 DEBUG:teuthology.orchestra.run.vm06:> time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.1.log 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24517.log: /var/log/ceph/ceph-client.admin.26054.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26054.log.gz 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27253.log 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24517.log.gz 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25775.log 
2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23949.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23949.log.gz 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27253.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23732.log 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27253.log.gz 2026-03-31T20:14:52.325 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25775.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26488.log 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25775.log.gz 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23732.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23732.log.gz 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24413.log 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20175.log 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26488.log: /var/log/ceph/ceph-client.admin.24413.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24413.log.gz 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr: 0.0%gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20283.log 2026-03-31T20:14:52.326 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /var/log/ceph/ceph-client.admin.26488.log.gz 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27585.log 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20175.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.20175.log.gz 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20283.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20283.log.gz 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24313.log 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25719.log 2026-03-31T20:14:52.327 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27585.log: /var/log/ceph/ceph-client.admin.24313.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24313.log.gz 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27585.log.gzgzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19223.log 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr: 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27816.log 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25719.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25719.log.gz 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19223.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19223.log.gz 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27558.log 2026-03-31T20:14:52.328 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-mgr.x.log 2026-03-31T20:14:52.329 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27816.log: /var/log/ceph/ceph-client.admin.27558.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27558.log.gz 2026-03-31T20:14:52.329 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.27816.log.gz 2026-03-31T20:14:52.329 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25334.log 2026-03-31T20:14:52.329 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26363.log 2026-03-31T20:14:52.329 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-mgr.x.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27098.log 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25334.log: /var/log/ceph/ceph-client.admin.26363.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28579.log 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26363.log.gz 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25334.log.gz 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27098.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27098.log.gz 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23925.log 2026-03-31T20:14:52.330 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24466.log 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28579.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28579.log.gz 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26922.log 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23925.log: /var/log/ceph/ceph-client.admin.24466.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27481.log 0.0% 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with 
/var/log/ceph/ceph-client.admin.24466.log.gz 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23925.log.gz 2026-03-31T20:14:52.331 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26922.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26922.log.gz 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28201.log 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27997.log 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27481.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27481.log.gz 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24646.log 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28201.log: /var/log/ceph/ceph-client.admin.27997.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26770.log 2026-03-31T20:14:52.332 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27997.log.gz 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28201.log.gz 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24646.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24646.log.gz 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.3.log 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20127.log 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.29055.log 2026-03-31T20:14:52.333 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26770.log: /var/log/ceph/ceph-client.admin.20127.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20127.log.gz 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26770.log.gz 2026-03-31T20:14:52.333 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28353.log 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.29079.log 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.29055.log: /var/log/ceph/ceph-client.admin.28353.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28353.log.gz 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20003.log 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.29055.log.gz 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.2.log 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25005.log 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20003.log: /var/log/ceph/ceph-client.admin.29079.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.29079.log.gz 2026-03-31T20:14:52.334 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20003.log.gz 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28717.log 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25386.log 2026-03-31T20:14:52.335 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25005.log: /var/log/ceph/ceph-client.admin.28717.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28717.log.gz 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25486.log 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25005.log.gz 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19633.log 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25486.log: /var/log/ceph/ceph-client.admin.25386.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25486.log.gz 2026-03-31T20:14:52.335 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25386.log.gz 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27786.log 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26027.log 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19633.log: /var/log/ceph/ceph-client.admin.27786.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27786.log.gz 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27329.log 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19633.log.gz 2026-03-31T20:14:52.336 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28743.log 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27329.log: /var/log/ceph/ceph-client.admin.26027.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.27329.log.gz 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26027.log.gz 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20335.log 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24104.log 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28743.log: 0.0%/var/log/ceph/ceph-client.admin.20335.log: -- replaced with /var/log/ceph/ceph-client.admin.28743.log.gz 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20182.log 2026-03-31T20:14:52.337 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20335.log.gz 2026-03-31T20:14:52.338 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24104.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24104.log.gz 2026-03-31T20:14:52.338 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27305.log 2026-03-31T20:14:52.338 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25361.log 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20182.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20182.log.gz 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23634.log 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27305.log: /var/log/ceph/ceph-client.admin.25361.log: 0.0% 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25361.log.gz 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.26821.log 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /var/log/ceph/ceph-client.admin.27305.log.gz 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23634.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23634.log.gz 2026-03-31T20:14:52.339 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21572.log 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24337.log 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27405.log 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26821.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26821.log.gz 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr:gzip/var/log/ceph/ceph-client.admin.24337.log: -5 --verbose -- /var/log/ceph/ceph-client.admin.28977.log 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24337.log.gz 2026-03-31T20:14:52.340 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24597.log 2026-03-31T20:14:52.341 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27405.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27405.log.gz 2026-03-31T20:14:52.341 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28977.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28977.log.gz 2026-03-31T20:14:52.341 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21596.log 2026-03-31T20:14:52.341 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23706.log 2026-03-31T20:14:52.341 
INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21500.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27022.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24597.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24597.log.gz 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28226.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23706.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23706.log.gz 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27022.log: gzip -5 --verbose -- 0.0% /var/log/ceph/ceph-client.admin.27356.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr: -- replaced with /var/log/ceph/ceph-client.admin.27022.log.gz 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.21572.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.21572.log.gz 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.21596.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.21596.log.gz 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21620.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24828.log 2026-03-31T20:14:52.342 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28226.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28226.log.gz 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.21500.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21548.log 
2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24673.log 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27356.log: 0.0%/var/log/ceph/ceph-client.admin.24828.log: -- replaced with /var/log/ceph/ceph-client.admin.27356.log.gz 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24828.log.gz 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.tmp-client.admin.19104.log 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.4.log 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24749.log 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.21620.log: /var/log/ceph/ceph-client.admin.21548.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.21524.log 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr: 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.21548.log.gz: No space left on device 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr: 2026-03-31T20:14:52.343 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.21500.log.gz: No space left on device 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24673.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24673.log.gz 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26564.log 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph.tmp-client.admin.19104.log: 0.0% -- replaced with /var/log/ceph/ceph.tmp-client.admin.19104.log.gz 
2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24749.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24749.log.gz 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24180.log 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-osd.4.log: 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.21620.log.gz: No space left on device 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-osd.5.log 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19666.log 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-client.admin.21524.log: 2026-03-31T20:14:52.344 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-client.admin.21524.log.gz: No space left on device 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26564.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26564.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26131.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24180.log: 0.0%/var/log/ceph/ceph-client.admin.19666.log: -- replaced with /var/log/ceph/ceph-client.admin.24180.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19666.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28124.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26131.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27661.log 
2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26131.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28124.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28021.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28124.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27661.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28326.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27661.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25209.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28021.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28021.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26464.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28326.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28326.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25209.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25209.log.gz 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20051.log 2026-03-31T20:14:52.347 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26464.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26464.log.gz 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23756.log 2026-03-31T20:14:52.348 
INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26003.log 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20051.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20051.log.gz 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23756.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19555.log 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27432.log 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26003.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26003.log.gz 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19555.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19555.log.gz 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/ceph-osd.0.log: /var/log/ceph/ceph-osd.1.log: 2026-03-31T20:14:52.348 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.0.log.gz: No space left on device 2026-03-31T20:14:52.349 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23900.log 2026-03-31T20:14:52.349 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27432.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24261.log 2026-03-31T20:14:52.349 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27432.log.gz 2026-03-31T20:14:52.349 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23900.log: gzip 0.0% -5 -- replaced with /var/log/ceph/ceph-client.admin.23900.log.gz --verbose 2026-03-31T20:14:52.349 INFO:teuthology.orchestra.run.vm01.stderr: -- /var/log/ceph/ceph-client.admin.25029.log 2026-03-31T20:14:52.350 
INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24261.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24261.log.gz 2026-03-31T20:14:52.350 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26257.log 2026-03-31T20:14:52.350 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25029.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24364.log 2026-03-31T20:14:52.350 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25029.log.gz 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26257.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28530.log 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26257.log.gz 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /var/log/ceph/ceph-osd.1.log.gz: No space left on device 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24364.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24364.log.gz 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27686.log 2026-03-31T20:14:52.351 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28530.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25978.log 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28530.log.gz 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27686.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27686.log.gz 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.28302.log 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr: 94.8% -- replaced with /var/log/ceph/ceph-client.admin.23756.log.gz 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19971.log 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.037s 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.054s 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.008s 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25978.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25978.log.gz 2026-03-31T20:14:52.352 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28302.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28302.log.gz 2026-03-31T20:14:52.353 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27896.log 2026-03-31T20:14:52.353 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19971.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19971.log.gz 2026-03-31T20:14:52.353 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph.log 2026-03-31T20:14:52.353 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27896.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27896.log.gz 2026-03-31T20:14:52.353 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27049.log 2026-03-31T20:14:52.354 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25285.log 2026-03-31T20:14:52.354 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27049.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.27049.log.gz 2026-03-31T20:14:52.354 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28378.log 2026-03-31T20:14:52.354 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26208.log 2026-03-31T20:14:52.354 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25285.log: /var/log/ceph/ceph-client.admin.28378.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25285.log.gz 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28378.log.gz 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27610.log 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28250.log 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26208.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26208.log.gz 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27610.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27610.log.gz 2026-03-31T20:14:52.355 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26716.log 2026-03-31T20:14:52.356 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25619.log 2026-03-31T20:14:52.356 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28250.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28250.log.gz 2026-03-31T20:14:52.356 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26716.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26716.log.gz 2026-03-31T20:14:52.356 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- 
/var/log/ceph/ceph-client.admin.26284.log 2026-03-31T20:14:52.356 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19678.log 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr: 88.8% -- replaced with /var/log/ceph/ceph.log.gz/var/log/ceph/ceph-client.admin.25619.log: 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25619.log.gz 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26284.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26284.log.gz 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26388.log 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23682.log 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19678.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19678.log.gz 2026-03-31T20:14:52.357 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26388.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26388.log.gz 2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26180.log 2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23682.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19882.log 2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23682.log.gz 2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26180.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27868.log 2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26180.log.gz 
2026-03-31T20:14:52.358 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23976.log 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19882.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19882.log.gz 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27868.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25594.log 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27868.log.gz 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23976.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26591.log 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23976.log.gz 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28873.log 2026-03-31T20:14:52.359 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25594.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25594.log.gz 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm05.stderr:/var/log/ceph/ceph-osd.3.log: /var/log/ceph/ceph-osd.2.log: 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /var/log/ceph/ceph-osd.3.log.gz: No space left on device 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /var/log/ceph/ceph-osd.2.log.gz: No space left on device 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26973.log 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26591.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26591.log.gz 
2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25310.log 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28873.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28873.log.gz 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25181.log 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26973.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26973.log.gz 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25310.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20309.log 2026-03-31T20:14:52.360 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25310.log.gz 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25181.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28899.log 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25181.log.gz 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25056.log 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm05.stderr:real 0m0.035s 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm05.stderr:user 0m0.047s 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm05.stderr:sys 0m0.014s 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20309.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20309.log.gz 2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28899.log: 0.0% -- replaced with 
/var/log/ceph/ceph-client.admin.28899.log.gz
2026-03-31T20:14:52.361 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28429.log
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25902.log
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25056.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25056.log.gz
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28429.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28429.log.gz
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19818.log
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25902.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28097.log
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25902.log.gz
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19818.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20187.log
2026-03-31T20:14:52.362 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19818.log.gz
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26897.log
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28097.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28097.log.gz
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24953.log
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20187.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20187.log.gz
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26897.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28691.log
2026-03-31T20:14:52.363 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26897.log.gz
2026-03-31T20:14:52.364 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24953.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24953.log.gz
2026-03-31T20:14:52.364 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23829.log
2026-03-31T20:14:52.364 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28402.log
2026-03-31T20:14:52.364 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28691.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28691.log.gz
2026-03-31T20:14:52.364 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27128.log
2026-03-31T20:14:52.365 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23829.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23829.log.gz
2026-03-31T20:14:52.365 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28402.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27921.log
2026-03-31T20:14:52.365 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28402.log.gz
2026-03-31T20:14:52.365 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27128.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27128.log.gz
2026-03-31T20:14:52.365 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24722.log
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27921.log: gzip -5
--verbose -- /var/log/ceph/ceph-client.admin.27457.log
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27921.log.gz
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19671.log
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24722.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24722.log.gz
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27457.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26439.log
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27457.log.gz
2026-03-31T20:14:52.366 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19671.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19671.log.gz
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25799.log
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26439.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19915.log
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26439.log.gz
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm06.stderr:/var/log/ceph/ceph-osd.5.log:
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.4.log.gz: No space left on device
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25799.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26870.log
2026-03-31T20:14:52.367 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25799.log.gz
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19915.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24131.log
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19915.log.gz
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /var/log/ceph/ceph-osd.5.log.gz: No space left on device
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26870.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26870.log.gz
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25750.log
2026-03-31T20:14:52.368 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24131.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19975.log
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24131.log.gz
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25750.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25750.log.gz
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm06.stderr:real 0m0.036s
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm06.stderr:user 0m0.054s
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm06.stderr:sys 0m0.011s
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26998.log
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19975.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19975.log.gz
2026-03-31T20:14:52.369 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose --
/var/log/ceph/ceph-client.admin.28951.log
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26998.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19581.log
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26998.log.gz
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28951.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25437.log
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28951.log.gz
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19581.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19581.log.gz
2026-03-31T20:14:52.370 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25234.log
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24774.log
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25437.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25437.log.gz
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr:gzip/var/log/ceph/ceph-client.admin.25234.log: -5 --verbose -- /var/log/ceph/ceph-client.admin.25875.log
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25234.log.gz
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24774.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25695.log
2026-03-31T20:14:52.371 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24774.log.gz
2026-03-31T20:14:52.372 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25875.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25875.log.gz
2026-03-31T20:14:52.372 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28555.log
2026-03-31T20:14:52.372 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25695.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28769.log
2026-03-31T20:14:52.372 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25695.log.gz
2026-03-31T20:14:52.372 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25258.log
2026-03-31T20:14:52.373 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28555.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28555.log.gz
2026-03-31T20:14:52.373 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28769.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25927.log
2026-03-31T20:14:52.373 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28769.log.gz
2026-03-31T20:14:52.373 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25258.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25258.log.gz
2026-03-31T20:14:52.373 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27634.log
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25927.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24566.log
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25927.log.gz
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27634.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27737.log
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with
/var/log/ceph/ceph-client.admin.27634.log.gz
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24566.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24566.log.gz
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26745.log
2026-03-31T20:14:52.374 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26079.log
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27737.log: /var/log/ceph/ceph-client.admin.26745.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27153.log
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26745.log.gz
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27737.log.gz
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26079.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26079.log.gz
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19298.log
2026-03-31T20:14:52.375 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19675.log
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27153.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27153.log.gz
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19298.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28149.log
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19298.log.gz
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19675.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26515.log
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19675.log.gz
2026-03-31T20:14:52.376 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25643.log
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28149.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28149.log.gz
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26515.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.29003.log
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26515.log.gz
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25643.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.23658.log
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25643.log.gz
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.29003.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.29003.log.gz
2026-03-31T20:14:52.377 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20042.log
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26233.log
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.23658.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.23658.log.gz
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr:gzip/var/log/ceph/ceph-client.admin.20042.log: -5 --verbose -- /var/log/ceph/ceph-client.admin.28847.log
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with
/var/log/ceph/ceph-client.admin.20042.log.gz
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26233.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26640.log
2026-03-31T20:14:52.378 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26233.log.gz
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25562.log
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28847.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28847.log.gz
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26640.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25105.log
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26640.log.gz
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25562.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25670.log
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25562.log.gz
2026-03-31T20:14:52.379 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28639.log
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25105.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25105.log.gz
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25670.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24490.log
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25670.log.gz
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28639.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20002.log
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28639.log.gz
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24490.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24490.log.gz
2026-03-31T20:14:52.380 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26103.log
2026-03-31T20:14:52.381 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26540.log
2026-03-31T20:14:52.381 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20002.log: /var/log/ceph/ceph-client.admin.26103.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28925.log
2026-03-31T20:14:52.381 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20002.log.gz
2026-03-31T20:14:52.381 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26103.log.gz
2026-03-31T20:14:52.381 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26540.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26540.log.gz
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27534.log
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24904.log
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28925.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28925.log.gz
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27534.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28173.log
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with
/var/log/ceph/ceph-client.admin.27534.log.gz
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24904.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24904.log.gz
2026-03-31T20:14:52.382 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28665.log
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27509.log
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28173.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28173.log.gz
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28506.log
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28665.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28665.log.gz
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27509.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27509.log.gz
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27074.log
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20232.log
2026-03-31T20:14:52.383 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28506.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28506.log.gz
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19827.log
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27074.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27074.log.gz
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20232.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25826.log
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20232.log.gz
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27229.log
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19827.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19827.log.gz
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28613.log
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25826.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25826.log.gz
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27229.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.29029.log
2026-03-31T20:14:52.384 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27229.log.gz
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28613.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28613.log.gz
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19337.log
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.29029.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19607.log
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.29029.log.gz
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19337.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19363.log
2026-03-31T20:14:52.385 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with
/var/log/ceph/ceph-client.admin.19337.log.gz
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24698.log
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19607.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19607.log.gz
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25513.log
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19363.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19363.log.gz
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24698.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24237.log
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24698.log.gz
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25513.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25513.log.gz
2026-03-31T20:14:52.386 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26794.log
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.28795.log
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24237.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24237.log.gz
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19469.log
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26794.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.26794.log.gz
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.28795.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19679.log
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.28795.log.gz
2026-03-31T20:14:52.387 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19469.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19469.log.gz
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19494.log
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.20149.log
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19679.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19679.log.gz
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.27710.log
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19494.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19494.log.gz
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.20149.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25462.log
2026-03-31T20:14:52.388 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.20149.log.gz
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.27710.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.27710.log.gz
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.19520.log
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25951.log
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25462.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25462.log.gz
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.19520.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.19520.log.gz
2026-03-31T20:14:52.389 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24001.log
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25951.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24877.log
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25951.log.gz
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25081.log
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24001.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24001.log.gz
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.24877.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24877.log.gz
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.25410.log
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25081.log: gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.26333.log
2026-03-31T20:14:52.390 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25081.log.gz
2026-03-31T20:14:52.391 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.25410.log: gzip -5 --verbose -- /var/log/ceph/ceph-mon.a.log
2026-03-31T20:14:52.391 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.25410.log.gz
2026-03-31T20:14:52.391 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-client.admin.26333.log: 0.0% -- replaced with
/var/log/ceph/ceph-client.admin.26333.log.gz
2026-03-31T20:14:52.391 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /var/log/ceph/ceph-client.admin.24288.log
2026-03-31T20:14:52.392 INFO:teuthology.orchestra.run.vm01.stderr:/var/log/ceph/ceph-mon.a.log: /var/log/ceph/ceph-client.admin.24288.log: 0.0% -- replaced with /var/log/ceph/ceph-client.admin.24288.log.gz
2026-03-31T20:14:52.543 INFO:teuthology.orchestra.run.vm01.stderr: 90.5% -- replaced with /var/log/ceph/ceph-mgr.x.log.gz
2026-03-31T20:14:52.977 INFO:teuthology.orchestra.run.vm01.stderr: 89.6% -- replaced with /var/log/ceph/ceph-mon.a.log.gz
2026-03-31T20:14:52.978 INFO:teuthology.orchestra.run.vm01.stderr:
2026-03-31T20:14:52.978 INFO:teuthology.orchestra.run.vm01.stderr:real 0m0.671s
2026-03-31T20:14:52.978 INFO:teuthology.orchestra.run.vm01.stderr:user 0m0.894s
2026-03-31T20:14:52.978 INFO:teuthology.orchestra.run.vm01.stderr:sys 0m0.084s
2026-03-31T20:14:52.978 DEBUG:teuthology.orchestra.run:got remote process result: 123
2026-03-31T20:14:52.978 ERROR:teuthology.run_tasks:Manager failed: ceph
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 1996, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 263, in ceph_log
    run.wait(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --"
2026-03-31T20:14:52.979 DEBUG:teuthology.run_tasks:Unwinding manager install
2026-03-31T20:14:52.982 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested
    yield vars
  File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 644, in task
    yield
  File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 1996, in task
    with contextutil.nested(*subtasks):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested
    raise exc[1]
  File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested
    if exit(*exc):
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 263, in ceph_log
    run.wait(
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --"
2026-03-31T20:14:52.982 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer...
2026-03-31T20:14:52.982 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-31T20:14:53.021 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-31T20:14:53.023 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-31T20:14:53.024 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer
2026-03-31T20:14:53.039 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-31T20:14:53.039 DEBUG:teuthology.orchestra.run.vm01:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-31T20:14:53.044 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-31T20:14:53.044 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-31T20:14:53.048 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-31T20:14:53.048 DEBUG:teuthology.orchestra.run.vm05:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-31T20:14:53.052 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system.
2026-03-31T20:14:53.052 DEBUG:teuthology.orchestra.run.vm06:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done
2026-03-31T20:14:53.104 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:53.104 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:53.107 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:53.108 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-31T20:14:53.284 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-31T20:14:53.284 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-31T20:14:53.285 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:53.286 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:53.287 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:53.287 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:53.288 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:53.288 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:53.439 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.2 has been restored
2026-03-31T20:14:53.452 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:53.452 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:53.452 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:53.453 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:53.473 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:53.473 INFO:teuthology.orchestra.run.vm05.stdout: ceph*
2026-03-31T20:14:53.486 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:53.486 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:53.487 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:53.487 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:53.499 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:53.499 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:53.500 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:53.500 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:53.504 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:53.504 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:53.505 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:53.505 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:53.518 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-31T20:14:53.518 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:53.519 INFO:teuthology.orchestra.run.vm01.stdout: ceph*
2026-03-31T20:14:53.519 INFO:teuthology.orchestra.run.vm06.stdout: ceph*
2026-03-31T20:14:53.524 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:53.525 INFO:teuthology.orchestra.run.vm03.stdout: ceph*
2026-03-31T20:14:53.636 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:53.637 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:53.640 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:53.658 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:53.674 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:53.674 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:53.679 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:53.693 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:53.693 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-31T20:14:53.698 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:53.711 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:53.711 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:53.715 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:53.715 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 47.1 kB disk space will be freed.
2026-03-31T20:14:53.716 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:53.716 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:53.716 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:53.716 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:53.736 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:53.748 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126188 files and directories currently installed.)
2026-03-31T20:14:53.750 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:53.850 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:53.850 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:53.879 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:53.880 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:53.922 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:53.922 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:54.032 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.032 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.033 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-31T20:14:54.033 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.045 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.046 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm* cephadm*
2026-03-31T20:14:54.088 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.088 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.088 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-31T20:14:54.088 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.101 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.102 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* cephadm*
2026-03-31T20:14:54.113 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.113 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.113 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-31T20:14:54.113 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.124 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.124 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm* cephadm*
2026-03-31T20:14:54.199 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.199 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 2 to remove and 50 not upgraded.
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 2177 kB disk space will be freed.
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.204 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.222 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:54.245 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.245 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 50 not upgraded.
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2177 kB disk space will be freed.
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.250 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.260 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.260 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 2 to remove and 50 not upgraded.
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 2177 kB disk space will be freed.
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.265 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.269 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:54.283 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:54.412 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:54.412 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:54.460 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:54.460 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:54.474 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:54.474 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:54.586 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.586 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.586 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:54.586 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.598 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.599 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds*
2026-03-31T20:14:54.647 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.647 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.647 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:54.647 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.658 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:54.659 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:54.659 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:54.659 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:54.659 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.660 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds*
2026-03-31T20:14:54.672 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:54.673 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mds*
2026-03-31T20:14:54.739 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.739 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 6851 kB disk space will be freed.
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.743 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.762 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:54.802 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.803 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.806 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:54.806 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 6851 kB disk space will be freed.
2026-03-31T20:14:54.807 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.807 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.807 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.807 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.808 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:54.808 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:54.812 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:54.812 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 6851 kB disk space will be freed.
2026-03-31T20:14:54.812 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.812 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:54.812 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:54.813 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:54.825 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:54.831 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:54.939 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:54.947 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:54.947 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:54.974 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-31T20:14:55.010 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:55.010 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:55.013 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:55.014 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:55.104 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-portend python3-psutil
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: python3-threadpoolctl python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils sg3-utils-udev
2026-03-31T20:14:55.105 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.114 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.114 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-mgr* ceph-mgr-cephadm* ceph-mgr-dashboard*
2026-03-31T20:14:55.115 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.141 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-31T20:14:55.142 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-31T20:14:55.180 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.180 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-portend python3-psutil
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: python3-threadpoolctl python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils sg3-utils-udev
2026-03-31T20:14:55.181 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.182 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.182 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-portend python3-psutil
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils sg3-utils-udev
2026-03-31T20:14:55.183 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.197 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.197 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-mgr* ceph-mgr-cephadm* ceph-mgr-dashboard*
2026-03-31T20:14:55.197 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.198 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.198 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-mgr* ceph-mgr-cephadm* ceph-mgr-dashboard*
2026-03-31T20:14:55.199 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.255 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.255 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 6 to remove and 50 not upgraded.
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 220 MB disk space will be freed.
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.259 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.267 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.267 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:55.267 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-31T20:14:55.267 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.277 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:55.280 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.282 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm* cephadm*
2026-03-31T20:14:55.347 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.347 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.347 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.347 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 6 to remove and 50 not upgraded.
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 220 MB disk space will be freed.
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.351 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 6 to remove and 50 not upgraded.
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 220 MB disk space will be freed.
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.352 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.369 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:55.371 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:55.455 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 2 to remove and 50 not upgraded.
2026-03-31T20:14:55.456 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 2177 kB disk space will be freed.
2026-03-31T20:14:55.463 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:55.463 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:55.497 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126186 files and directories currently installed.)
2026-03-31T20:14:55.499 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:55.509 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-cephadm, directory '/usr/share/ceph/mgr/cephadm/services' not empty so not removed
2026-03-31T20:14:55.519 INFO:teuthology.orchestra.run.vm01.stdout:Removing cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:55.548 INFO:teuthology.orchestra.run.vm01.stdout:Looking for files to backup/remove ...
2026-03-31T20:14:55.549 INFO:teuthology.orchestra.run.vm01.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*.
2026-03-31T20:14:55.552 INFO:teuthology.orchestra.run.vm01.stdout:Removing user `cephadm' ...
2026-03-31T20:14:55.552 INFO:teuthology.orchestra.run.vm01.stdout:Warning: group `nogroup' has no more members.
2026-03-31T20:14:55.555 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:55.555 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:55.561 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:55.561 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:55.563 INFO:teuthology.orchestra.run.vm01.stdout:Done.
2026-03-31T20:14:55.586 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:55.588 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.601 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.601 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:55.601 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.601 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-31T20:14:55.705 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126100 files and directories currently installed.)
2026-03-31T20:14:55.708 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for cephadm (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:55.718 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:55.719 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:55.719 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:55.719 INFO:teuthology.orchestra.run.vm06.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.719 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:55.719 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:55.730 INFO:teuthology.orchestra.run.vm03.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:55.731 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:55.731 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:55.731 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.731 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:55.731 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.732 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-31T20:14:55.745 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:55.745 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:55.745 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:55.746 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw*
2026-03-31T20:14:55.748 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.748 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 14 to remove and 50 not upgraded.
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 959 MB disk space will be freed.
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.753 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.772 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:55.877 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.878 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.881 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 14 to remove and 50 not upgraded.
2026-03-31T20:14:55.881 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 959 MB disk space will be freed.
2026-03-31T20:14:55.881 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.881 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.881 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.882 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.886 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:55.886 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:55.890 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 14 to remove and 50 not upgraded.
2026-03-31T20:14:55.890 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 959 MB disk space will be freed.
2026-03-31T20:14:55.891 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:55.891 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:55.891 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:55.891 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:55.901 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:55.909 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:55.970 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:55.970 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:56.082 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.082 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:56.082 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:56.082 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.092 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.092 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse*
2026-03-31T20:14:56.094 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:56.094 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:56.100 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:56.100 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:56.238 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.239 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 2932 kB disk space will be freed.
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.243 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.250 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.251 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:56.251 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:56.251 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.252 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.252 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:56.253 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:56.253 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.267 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.267 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:56.268 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse*
2026-03-31T20:14:56.270 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.271 INFO:teuthology.orchestra.run.vm06.stdout: ceph-fuse*
2026-03-31T20:14:56.417 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.417 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.421 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.421 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2932 kB disk space will be freed.
2026-03-31T20:14:56.421 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.422 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.422 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.422 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.424 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.424 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 2932 kB disk space will be freed.
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.429 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.442 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:56.449 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:56.457 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:56.458 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:56.575 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.575 INFO:teuthology.orchestra.run.vm05.stdout: jq kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libjq1 libonig5
2026-03-31T20:14:56.575 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj1 libsgutils2-2 sg3-utils sg3-utils-udev socat xmlstarlet
2026-03-31T20:14:56.575 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.585 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.585 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test*
2026-03-31T20:14:56.628 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:56.628 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:56.637 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:56.638 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:56.727 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.727 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 408 MB disk space will be freed.
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.732 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.751 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:56.773 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.773 INFO:teuthology.orchestra.run.vm03.stdout: jq kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libjq1 libonig5
2026-03-31T20:14:56.773 INFO:teuthology.orchestra.run.vm03.stdout: libpmemobj1 libsgutils2-2 sg3-utils sg3-utils-udev socat xmlstarlet
2026-03-31T20:14:56.773 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.785 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.786 INFO:teuthology.orchestra.run.vm03.stdout: ceph-test*
2026-03-31T20:14:56.800 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:56.800 INFO:teuthology.orchestra.run.vm06.stdout: jq kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libjq1 libonig5
2026-03-31T20:14:56.800 INFO:teuthology.orchestra.run.vm06.stdout: libpmemobj1 libsgutils2-2 sg3-utils sg3-utils-udev socat xmlstarlet
2026-03-31T20:14:56.800 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:56.812 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:56.812 INFO:teuthology.orchestra.run.vm06.stdout: ceph-test*
2026-03-31T20:14:56.835 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.868 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-31T20:14:56.931 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.931 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 408 MB disk space will be freed.
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.935 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.938 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:56.939 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:56.952 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:56.955 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:56.955 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 408 MB disk space will be freed.
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:56.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:56.978 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:57.062 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-31T20:14:57.062 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-31T20:14:57.072 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.072 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.072 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.072 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.084 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.085 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume*
2026-03-31T20:14:57.144 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:57.145 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:57.169 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:57.170 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:57.220 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.220 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.221 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev
2026-03-31T20:14:57.221 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.224 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.224 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.228 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.228 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 836 kB disk space will be freed.
2026-03-31T20:14:57.229 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.229 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.229 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.229 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.236 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.238 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds*
2026-03-31T20:14:57.248 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:57.310 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.310 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.311 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.311 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.311 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.311 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.312 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.312 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.325 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.325 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.325 INFO:teuthology.orchestra.run.vm06.stdout: ceph-volume*
2026-03-31T20:14:57.326 INFO:teuthology.orchestra.run.vm03.stdout: ceph-volume*
2026-03-31T20:14:57.423 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.423 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 6851 kB disk space will be freed.
2026-03-31T20:14:57.430 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:57.431 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:57.465 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126100 files and directories currently installed.)
2026-03-31T20:14:57.467 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mds (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:57.471 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.471 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.473 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.473 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 836 kB disk space will be freed.
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.476 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 836 kB disk space will be freed.
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.477 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.495 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:57.497 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:57.560 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.560 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.560 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.560 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.570 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.570 INFO:teuthology.orchestra.run.vm05.stdout: radosgw*
2026-03-31T20:14:57.639 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:57.639 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:57.675 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:57.676 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:57.702 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.703 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 98.3 MB disk space will be freed.
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.707 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.725 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:57.768 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.769 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.769 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.769 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.781 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.781 INFO:teuthology.orchestra.run.vm03.stdout: radosgw*
2026-03-31T20:14:57.829 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:57.829 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:14:57.829 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:14:57.829 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:57.839 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:57.839 INFO:teuthology.orchestra.run.vm06.stdout: radosgw*
2026-03-31T20:14:57.904 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-31T20:14:57.913 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:57.914 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:57.923 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.923 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.927 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.928 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 98.3 MB disk space will be freed.
2026-03-31T20:14:57.928 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.928 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.928 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.928 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.947 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:57.974 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:57.974 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 98.3 MB disk space will be freed.
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:57.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:57.996 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:58.001 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126092 files and directories currently installed.)
2026-03-31T20:14:58.003 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mds (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout: socat xmlstarlet
2026-03-31T20:14:58.049 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.061 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.062 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.062 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.062 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* python3-rados*
2026-03-31T20:14:58.062 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw* radosgw*
2026-03-31T20:14:58.065 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:58.065 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:58.179 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:58.179 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:58.184 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.184 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.184 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout: socat xmlstarlet
2026-03-31T20:14:58.185 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.197 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.197 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.197 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.198 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* python3-rados*
2026-03-31T20:14:58.198 INFO:teuthology.orchestra.run.vm03.stdout: python3-rgw* radosgw*
2026-03-31T20:14:58.210 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.210 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 17 to remove and 50 not upgraded.
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.215 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.233 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout: socat xmlstarlet
2026-03-31T20:14:58.320 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.328 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.329 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.329 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.329 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* python3-rados*
2026-03-31T20:14:58.329 INFO:teuthology.orchestra.run.vm06.stdout: python3-rgw* radosgw*
2026-03-31T20:14:58.342 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.342 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.346 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 17 to remove and 50 not upgraded.
2026-03-31T20:14:58.346 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:14:58.347 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.347 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.347 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.347 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.365 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:58.420 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:58.420 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:58.465 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.465 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.469 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 17 to remove and 50 not upgraded.
2026-03-31T20:14:58.469 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:14:58.469 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.469 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.470 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.470 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.488 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:58.553 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.553 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.553 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.553 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.553 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools python3-ceph-common
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:58.554 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.565 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.565 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.565 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.565 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rgw* radosgw*
2026-03-31T20:14:58.566 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:58.566 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:58.680 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:58.680 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:58.701 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.701 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.701 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.701 INFO:teuthology.orchestra.run.vm03.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools python3-ceph-common
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-cheroot python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:58.702 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.707 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.707 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.712 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.715 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.715 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.715 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.716 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rgw* radosgw*
2026-03-31T20:14:58.730 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:58.822 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:58.822 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:58.822 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: librdkafka1 librgw2 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools python3-ceph-common
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-cheroot python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:58.823 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:58.834 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:58.834 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:58.834 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:58.834 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rgw* radosgw*
2026-03-31T20:14:58.862 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.862 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.866 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.885 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:58.885 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:58.886 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:58.974 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:58.974 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:58.978 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:58.984 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:58.985 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:58.997 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:59.003 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.003 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.003 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.004 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.017 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.017 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.017 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.018 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* radosgw*
2026-03-31T20:14:59.132 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.132 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.132 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.132 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.132 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-portend python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.133 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.142 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.142 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.142 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.143 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* radosgw*
2026-03-31T20:14:59.160 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:59.160 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:59.164 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:59.182 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:59.189 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:59.189 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:59.276 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:59.277 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:59.281 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:59.300 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:59.306 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.306 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-portend python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.307 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.316 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.316 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.316 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.316 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-cephfs* radosgw*
2026-03-31T20:14:59.368 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:59.369 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:59.436 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:59.436 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:14:59.451 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:59.451 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 960 MB disk space will be freed.
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:59.455 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:59.474 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:14:59.494 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.512 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.512 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.513 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.522 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.522 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.522 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.523 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rbd* radosgw*
2026-03-31T20:14:59.529 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:59.587 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.588 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.598 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.598 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.598 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.598 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rbd* radosgw*
2026-03-31T20:14:59.661 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:14:59.661 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:14:59.666 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:59.666 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:59.670 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:59.688 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:14:59.715 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-31T20:14:59.716 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-31T20:14:59.743 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:14:59.743 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:14:59.748 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:14:59.766 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:14:59.839 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.839 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-prettytable python3-psutil python3-repoze.lru
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:14:59.840 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.853 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.853 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:14:59.853 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:14:59.853 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* python3-rbd* radosgw*
2026-03-31T20:14:59.865 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:14:59.865 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh python3-cachetools python3-cheroot python3-cherrypy3
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes python3-natsort python3-portend python3-psutil
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-webob python3-websocket python3-zc.lockfile
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils sg3-utils-udev
2026-03-31T20:14:59.866 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:14:59.876 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED:
2026-03-31T20:14:59.877 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-31T20:14:59.877 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents*
2026-03-31T20:14:59.888 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:14:59.889 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:14:59.962 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:14:59.962 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:15:00.007 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.008 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 15 to remove and 50 not upgraded.
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 961 MB disk space will be freed.
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.012 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.030 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libcephfs-proxy2 libjq1 liboath0 libonig5 libpmemobj1
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout: socat xmlstarlet
2026-03-31T20:15:00.031 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.033 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:15:00.040 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.040 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:15:00.040 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:15:00.040 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* libcephfs-daemon* libcephfs-dev*
2026-03-31T20:15:00.040 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2* python3-cephfs* radosgw*
2026-03-31T20:15:00.044 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 4 to remove and 50 not upgraded.
2026-03-31T20:15:00.044 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 219 MB disk space will be freed.
2026-03-31T20:15:00.084 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126092 files and directories currently installed.)
2026-03-31T20:15:00.086 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-k8sevents (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:15:00.096 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-diskprediction-local (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libcephfs-proxy2 libjq1 liboath0 libonig5 libpmemobj1
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout: socat xmlstarlet
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.114 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-diskprediction-local, directory '/usr/share/ceph/mgr/diskprediction_local' not empty so not removed
2026-03-31T20:15:00.124 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr-dashboard (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:15:00.126 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.126 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:15:00.126 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:15:00.126 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* libcephfs-daemon* libcephfs-dev*
2026-03-31T20:15:00.127 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs2* python3-cephfs* radosgw*
2026-03-31T20:15:00.178 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.179 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.182 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded.
2026-03-31T20:15:00.182 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 963 MB disk space will be freed.
2026-03-31T20:15:00.183 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.183 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.183 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.183 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.189 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-dashboard, directory '/usr/share/ceph/mgr/dashboard/services/auth' not empty so not removed
2026-03-31T20:15:00.190 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-dashboard, directory '/usr/share/ceph/mgr/dashboard/plugins' not empty so not removed
2026-03-31T20:15:00.190 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-dashboard, directory '/usr/share/ceph/mgr/dashboard/model' not empty so not removed
2026-03-31T20:15:00.190 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-dashboard, directory '/usr/share/ceph/mgr/dashboard/controllers' not empty so not removed
2026-03-31T20:15:00.190 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr-dashboard, directory '/usr/share/ceph/mgr/dashboard/api' not empty so not removed
2026-03-31T20:15:00.200 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mgr (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:15:00.200 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:15:00.200 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:15:00.201 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:15:00.268 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.268 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded.
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 963 MB disk space will be freed.
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.272 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.291 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libcephfs-proxy2 libjq1 liboath0 libonig5 libpmemobj1
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: libradosstriper1 libsgutils2-2 libsqlite3-mod-ceph nvme-cli
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout: socat xmlstarlet
2026-03-31T20:15:00.321 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.330 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.331 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:15:00.331 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:15:00.331 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* libcephfs-daemon* libcephfs-dev*
2026-03-31T20:15:00.331 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs2* python3-cephfs* radosgw*
2026-03-31T20:15:00.365 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:15:00.365 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:15:00.427 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:15:00.427 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:15:00.467 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.467 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded.
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 963 MB disk space will be freed.
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.471 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.484 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.484 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:15:00.485 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:15:00.485 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.489 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:15:00.501 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.502 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-dev*
2026-03-31T20:15:00.569 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.569 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:15:00.569 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:15:00.569 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.579 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.579 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev*
2026-03-31T20:15:00.644 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.644 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 216 kB disk space will be freed.
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.648 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.667 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:15:00.683 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 124310 files and directories currently installed.)
2026-03-31T20:15:00.684 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:15:00.685 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:15:00.685 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for ceph-mgr (20.2.0-721-g5bb32787-1jammy) ...
2026-03-31T20:15:00.720 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.720 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 216 kB disk space will be freed.
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.724 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.742 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:15:00.808 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.808 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-31T20:15:00.809 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-31T20:15:00.809 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.819 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.819 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-dev*
2026-03-31T20:15:00.854 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:15:00.855 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:15:00.928 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:15:00.929 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:15:00.955 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:00.955 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded.
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 216 kB disk space will be freed.
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:00.960 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:00.976 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:00.977 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:00.977 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libcephfs-proxy2 libdouble-conversion3 libfuse2
2026-03-31T20:15:00.977 INFO:teuthology.orchestra.run.vm05.stdout: libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1
2026-03-31T20:15:00.977 INFO:teuthology.orchestra.run.vm05.stdout: libnbd0 liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: libqt5network5 librdkafka1 libsgutils2-2 libthrift-0.16.0 nvme-cli
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n sg3-utils
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:00.978 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-fuse* ceph-mds* ceph-mgr*
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-k8sevents* ceph-mon* ceph-osd* ceph-test* ceph-volume*
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-daemon* libcephfs-dev* libcephfs2* librados2* libradosstriper1*
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: librbd1* librgw2* libsqlite3-mod-ceph* python3-cephfs* python3-rados*
2026-03-31T20:15:00.993 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd* python3-rgw* qemu-block-extra* radosgw* rbd-fuse*
2026-03-31T20:15:01.073 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:01.073 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:01.073 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libcephfs-proxy2 libdouble-conversion3 libfuse2
2026-03-31T20:15:01.073 INFO:teuthology.orchestra.run.vm03.stdout: libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1
2026-03-31T20:15:01.073 INFO:teuthology.orchestra.run.vm03.stdout: libnbd0 liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libsgutils2-2 libthrift-0.16.0 nvme-cli
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n sg3-utils
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:15:01.074 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED:
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-fuse* ceph-mds* ceph-mgr*
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents* ceph-mon* ceph-osd* ceph-test* ceph-volume*
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-daemon* libcephfs-dev* libcephfs2* librados2* libradosstriper1*
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: librbd1* librgw2* libsqlite3-mod-ceph* python3-cephfs* python3-rados*
2026-03-31T20:15:01.084 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd* python3-rgw* qemu-block-extra* radosgw* rbd-fuse*
2026-03-31T20:15:01.095 INFO:teuthology.orchestra.run.vm01.stdout:dpkg: warning: while removing ceph-mgr, directory '/var/lib/ceph/mgr' not empty so not removed
2026-03-31T20:15:01.145 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:01.145 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 29 to remove and 50 not upgraded.
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 1028 MB disk space will be freed.
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:01.149 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:01.169 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists...
2026-03-31T20:15:01.170 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-31T20:15:01.170 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-31T20:15:01.224 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:01.224 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 29 to remove and 50 not upgraded.
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1028 MB disk space will be freed.
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - ~LZMAFILE (28: No space left on device)
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:01.229 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:01.248 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists...
2026-03-31T20:15:01.285 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree...
2026-03-31T20:15:01.286 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information...
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libcephfs-proxy2 libdouble-conversion3 libfuse2
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: libnbd0 liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: libqt5network5 librdkafka1 libsgutils2-2 libthrift-0.16.0 nvme-cli
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-ceph-argparse python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n sg3-utils
2026-03-31T20:15:01.313 INFO:teuthology.orchestra.run.vm06.stdout: sg3-utils-udev smartmontools socat xmlstarlet
2026-03-31T20:15:01.314 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:01.327 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED:
2026-03-31T20:15:01.327 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-fuse* ceph-mds* ceph-mgr*
2026-03-31T20:15:01.327 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-cephadm* ceph-mgr-dashboard* ceph-mgr-diskprediction-local*
2026-03-31T20:15:01.327 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-k8sevents* ceph-mon* ceph-osd* ceph-test* ceph-volume*
2026-03-31T20:15:01.327 INFO:teuthology.orchestra.run.vm06.stdout: libcephfs-daemon* libcephfs-dev* libcephfs2* librados2* libradosstriper1*
2026-03-31T20:15:01.328 INFO:teuthology.orchestra.run.vm06.stdout: librbd1* librgw2* libsqlite3-mod-ceph* python3-cephfs* python3-rados*
2026-03-31T20:15:01.328 INFO:teuthology.orchestra.run.vm06.stdout: python3-rbd* python3-rgw* qemu-block-extra* radosgw* rbd-fuse*
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required:
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: libboost-thread1.74.0 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: libjq1 libnbd0 liboath0 libonig5 libpmemobj1 libradosstriper1 libsgutils2-2
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: libsqlite3-mod-ceph nvme-cli python-asyncssh-doc python3-asyncssh
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout: socat xmlstarlet
2026-03-31T20:15:01.427 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-31T20:15:01.436 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED:
2026-03-31T20:15:01.436 INFO:teuthology.orchestra.run.vm05.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm*
2026-03-31T20:15:01.436 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents*
2026-03-31T20:15:01.436 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* librbd1* python3-rbd*
2026-03-31T20:15:01.437 INFO:teuthology.orchestra.run.vm05.stdout: qemu-block-extra* radosgw* rbd-fuse*
2026-03-31T20:15:01.443 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-31T20:15:01.443 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information...
2026-03-31T20:15:01.472 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting:
2026-03-31T20:15:01.472 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 29 to remove and 50 not upgraded.
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 1028 MB disk space will be freed.
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device)
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2)
2026-03-31T20:15:01.477 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states
2026-03-31T20:15:01.495 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-31T20:15:01.575 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:01.576 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: libjq1 libnbd0 liboath0 libonig5 libpmemobj1 libradosstriper1 libsgutils2-2 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph nvme-cli python-asyncssh-doc python3-asyncssh 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 2026-03-31T20:15:01.578 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob 2026-03-31T20:15:01.579 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket 
python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools 2026-03-31T20:15:01.579 INFO:teuthology.orchestra.run.vm03.stdout: socat xmlstarlet 2026-03-31T20:15:01.579 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded. 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 972 MB disk space will be freed. 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:01.580 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:01.590 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-31T20:15:01.590 INFO:teuthology.orchestra.run.vm03.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm* 2026-03-31T20:15:01.590 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents* 2026-03-31T20:15:01.591 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* librbd1* python3-rbd* 2026-03-31T20:15:01.591 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* radosgw* rbd-fuse* 2026-03-31T20:15:01.599 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-31T20:15:01.691 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-31T20:15:01.691 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 
2026-03-31T20:15:01.738 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:01.738 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded. 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 972 MB disk space will be freed. 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - ~LZMAFILE (28: No space left on device) 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:01.742 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:01.760 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-31T20:15:01.791 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree... 2026-03-31T20:15:01.791 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information... 
2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: libboost-thread1.74.0 libgfapi0 libgfrpc0 libgfxdr0 libglusterfs0 libiscsi7 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: libjq1 libnbd0 liboath0 libonig5 libpmemobj1 libradosstriper1 libsgutils2-2 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: libsqlite3-mod-ceph nvme-cli python-asyncssh-doc python3-asyncssh 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-cachetools python3-ceph-common python3-cheroot python3-cherrypy3 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-google-auth python3-jaraco.classes python3-jaraco.collections 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-kubernetes python3-natsort python3-portend python3-prettytable 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-psutil python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-tempora python3-threadpoolctl python3-wcwidth python3-webob 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: python3-websocket python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout: socat xmlstarlet 2026-03-31T20:15:01.846 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T20:15:01.858 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED: 2026-03-31T20:15:01.858 INFO:teuthology.orchestra.run.vm06.stdout: ceph* ceph-base* ceph-common* ceph-mds* ceph-mgr* ceph-mgr-cephadm* 2026-03-31T20:15:01.858 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mgr-dashboard* ceph-mgr-diskprediction-local* ceph-mgr-k8sevents* 2026-03-31T20:15:01.858 INFO:teuthology.orchestra.run.vm06.stdout: ceph-mon* ceph-osd* ceph-test* ceph-volume* librbd1* python3-rbd* 2026-03-31T20:15:01.859 INFO:teuthology.orchestra.run.vm06.stdout: qemu-block-extra* radosgw* rbd-fuse* 2026-03-31T20:15:01.930 INFO:teuthology.orchestra.run.vm05.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:01.930 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:01.930 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:01.930 INFO:teuthology.orchestra.run.vm05.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T20:15:01.941 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED: 2026-03-31T20:15:01.941 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse* 2026-03-31T20:15:01.956 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-31T20:15:01.957 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-31T20:15:01.997 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:01.998 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 18 to remove and 50 not upgraded. 2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 972 MB disk space will be freed. 
2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.001 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.020 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 2026-03-31T20:15:02.078 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:02.078 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded. 2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 286 kB disk space will be freed. 2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.082 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.084 DEBUG:teuthology.orchestra.run.vm05:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-31T20:15:02.086 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:02.086 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:02.086 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:02.086 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T20:15:02.096 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.097 INFO:teuthology.orchestra.run.vm03.stdout: rbd-fuse* 2026-03-31T20:15:02.141 DEBUG:teuthology.orchestra.run.vm05:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-31T20:15:02.197 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-31T20:15:02.198 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 2026-03-31T20:15:02.206 INFO:teuthology.orchestra.run.vm05.stdout:Reading package lists... 2026-03-31T20:15:02.224 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-31T20:15:02.227 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:02.227 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.231 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded. 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 286 kB disk space will be freed. 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - ~LZMAFILE (28: No space left on device) 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.232 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.234 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-31T20:15:02.258 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 
2026-03-31T20:15:02.293 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-31T20:15:02.323 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:02.324 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:02.324 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:02.324 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-31T20:15:02.333 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.334 INFO:teuthology.orchestra.run.vm06.stdout: rbd-fuse* 2026-03-31T20:15:02.353 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-31T20:15:02.395 INFO:teuthology.orchestra.run.vm05.stdout:Building dependency tree... 2026-03-31T20:15:02.395 INFO:teuthology.orchestra.run.vm05.stdout:Reading state information... 2026-03-31T20:15:02.446 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-31T20:15:02.447 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-31T20:15:02.473 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:02.473 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.477 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 1 to remove and 50 not upgraded. 2026-03-31T20:15:02.477 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 286 kB disk space will be freed. 
2026-03-31T20:15:02.478 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:02.478 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.478 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.478 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.480 DEBUG:teuthology.orchestra.run.vm06:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-31T20:15:02.521 INFO:teuthology.orchestra.run.vm05.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.521 INFO:teuthology.orchestra.run.vm05.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:02.521 INFO:teuthology.orchestra.run.vm05.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:02.536 DEBUG:teuthology.orchestra.run.vm06:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-31T20:15:02.548 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-31T20:15:02.548 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-31T20:15:02.597 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists... 
2026-03-31T20:15:02.600 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-31T20:15:02.600 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-31T20:15:02.600 INFO:teuthology.orchestra.run.vm01.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh python3-cachetools python3-ceph-common python3-cheroot 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-joblib python3-kubernetes python3-natsort python3-portend 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-repoze.lru 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplejson 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-sklearn python3-sklearn-lib python3-tempora python3-threadpoolctl 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: python3-wcwidth python3-webob python3-websocket python3-zc.lockfile 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout: sg3-utils sg3-utils-udev smartmontools socat xmlstarlet 2026-03-31T20:15:02.601 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-31T20:15:02.619 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.620 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-31T20:15:02.667 INFO:teuthology.orchestra.run.vm05.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:02.667 INFO:teuthology.orchestra.run.vm05.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.671 INFO:teuthology.orchestra.run.vm05.stdout:0 upgraded, 0 newly installed, 7 to remove and 50 not upgraded. 2026-03-31T20:15:02.671 INFO:teuthology.orchestra.run.vm05.stdout:After this operation, 8045 kB disk space will be freed. 2026-03-31T20:15:02.672 INFO:teuthology.orchestra.run.vm05.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:02.672 INFO:teuthology.orchestra.run.vm05.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.672 INFO:teuthology.orchestra.run.vm05.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.672 INFO:teuthology.orchestra.run.vm05.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.673 DEBUG:teuthology.orchestra.run:got remote process result: 100 2026-03-31T20:15:02.673 ERROR:teuthology.run_tasks:Manager failed: install Traceback (most recent call last): File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 220, in install yield File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 32, in nested yield vars File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 644, in task yield File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File 
"/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 1996, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 263, in ceph_log run.wait( File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --" During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 640, in task with 
contextutil.nested( File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 222, in install remove_packages(ctx, config, package_list) File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/__init__.py", line 103, in remove_packages with parallel() as p: File "/home/teuthos/kshtsk/teuthology/teuthology/parallel.py", line 84, in __exit__ for result in self: File "/home/teuthos/kshtsk/teuthology/teuthology/parallel.py", line 98, in __next__ resurrect_traceback(result) File "/home/teuthos/kshtsk/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback raise exc.exc_info[1] File "/home/teuthos/kshtsk/teuthology/teuthology/parallel.py", line 23, in capture_traceback return func(*args, **kwargs) File "/home/teuthos/kshtsk/teuthology/teuthology/task/install/deb.py", line 154, in _remove remote.run( File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/remote.py", line 596, in run r = self._runner(client=self.ssh, name=self.shortname, **kwargs) File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 461, in run r.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm05 with status 100: 'sudo DEBIAN_FRONTEND=noninteractive apt-get -y 
--force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove' 2026-03-31T20:15:02.674 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-31T20:15:02.676 INFO:teuthology.task.clock:Checking final clock skew... 2026-03-31T20:15:02.677 DEBUG:teuthology.orchestra.run.vm01:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-31T20:15:02.678 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-31T20:15:02.679 DEBUG:teuthology.orchestra.run.vm05:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-31T20:15:02.683 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.683 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:02.684 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:02.716 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-31T20:15:02.726 INFO:teuthology.orchestra.run.vm05.stdout: remote refid st t when poll reach delay offset jitter 2026-03-31T20:15:02.726 INFO:teuthology.orchestra.run.vm05.stdout:============================================================================== 2026-03-31T20:15:02.726 INFO:teuthology.orchestra.run.vm05.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.726 INFO:teuthology.orchestra.run.vm05.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.726 INFO:teuthology.orchestra.run.vm05.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout: 3.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout:-kronos.mailus.d 131.188.3.222 2 u 57 128 377 25.041 -0.076 0.048 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout:-zeus.f5s.de 192.53.103.104 2 u 13 256 377 25.056 -0.442 0.115 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout:+141.144.246.224 169.254.169.254 4 u 65 128 377 20.824 -0.228 0.045 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout:*mail.gunnarhofm 192.53.103.108 2 u 130 128 377 24.993 -0.321 0.057 2026-03-31T20:15:02.727 INFO:teuthology.orchestra.run.vm05.stdout:+ns.gunnarhofman 237.17.204.95 2 u 79 128 377 24.998 -0.312 0.072 2026-03-31T20:15:02.782 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree... 2026-03-31T20:15:02.782 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information... 2026-03-31T20:15:02.789 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 7 to remove and 50 not upgraded. 2026-03-31T20:15:02.789 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 732 MB disk space will be freed. 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:============================================================================== 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:+mail.gunnarhofm 192.53.103.108 2 u 7 128 377 25.045 -0.407 0.237 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:+141.144.246.224 169.254.169.254 4 u 14 128 377 20.911 -0.732 0.244 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:*217.217.243.78 193.79.237.14 2 u 130 128 377 27.525 -0.373 0.384 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:-ns.gunnarhofman 237.17.204.95 2 u 1 128 377 24.967 -0.741 0.270 2026-03-31T20:15:02.802 INFO:teuthology.orchestra.run.vm01.stdout:-185.125.190.57 99.220.8.133 2 u 41 128 377 29.633 +0.064 2.000 2026-03-31T20:15:02.817 INFO:teuthology.orchestra.run.vm03.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:02.817 INFO:teuthology.orchestra.run.vm03.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:02.821 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 50 not upgraded. 2026-03-31T20:15:02.821 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 8045 kB disk space will be freed. 2026-03-31T20:15:02.822 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-31T20:15:02.822 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - ~LZMAFILE (28: No space left on device) 2026-03-31T20:15:02.822 INFO:teuthology.orchestra.run.vm03.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:02.822 INFO:teuthology.orchestra.run.vm03.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:02.822 INFO:teuthology.orchestra.run.vm03.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:02.823 DEBUG:teuthology.orchestra.run:got remote process result: 100 2026-03-31T20:15:02.831 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 124309 files and directories currently installed.) 2026-03-31T20:15:02.833 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-volume (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:02.890 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-osd (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T20:15:02.897 INFO:teuthology.orchestra.run.vm06.stdout:The following packages will be REMOVED: 2026-03-31T20:15:02.897 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-31T20:15:02.897 INFO:teuthology.orchestra.run.vm06.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-31T20:15:03.021 INFO:teuthology.orchestra.run.vm06.stderr:dpkg: unrecoverable fatal error, aborting: 2026-03-31T20:15:03.021 INFO:teuthology.orchestra.run.vm06.stderr: unable to fill /var/lib/dpkg/updates/tmp.i with padding: No space left on device 2026-03-31T20:15:03.024 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 7 to remove and 50 not upgraded. 2026-03-31T20:15:03.025 INFO:teuthology.orchestra.run.vm06.stdout:After this operation, 8045 kB disk space will be freed. 2026-03-31T20:15:03.025 INFO:teuthology.orchestra.run.vm06.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-31T20:15:03.025 INFO:teuthology.orchestra.run.vm06.stderr:E: Write error - write (28: No space left on device) 2026-03-31T20:15:03.025 INFO:teuthology.orchestra.run.vm06.stderr:E: Sub-process dpkg --set-selections returned an error code (2) 2026-03-31T20:15:03.025 INFO:teuthology.orchestra.run.vm06.stderr:E: Couldn't record the approved state changes as dpkg selection states 2026-03-31T20:15:03.026 DEBUG:teuthology.orchestra.run:got remote process result: 100 2026-03-31T20:15:03.292 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-mon (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:03.673 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-base (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:04.079 INFO:teuthology.orchestra.run.vm01.stdout:Removing radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:04.472 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-test (20.2.0-721-g5bb32787-1jammy) ... 
2026-03-31T20:15:04.540 INFO:teuthology.orchestra.run.vm01.stdout:Removing ceph-common (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:============================================================================== 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:*sambuca.psychon 174.222.245.115 2 u 37 256 377 25.028 -5.655 0.676 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+ns.gunnarhofman 237.17.204.95 2 u 263 256 377 24.960 -5.113 0.773 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+zeus.f5s.de 192.53.103.104 2 u 7 256 377 24.995 -5.468 0.627 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+mail.gunnarhofm 192.53.103.108 2 u 75 256 377 25.038 -6.028 0.977 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+141.144.246.224 169.254.169.254 4 u 32 256 377 20.829 -5.418 0.518 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+185.125.190.58 99.220.8.133 2 u 239 256 377 29.585 -4.941 1.561 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm06.stdout:+185.125.190.57 99.220.8.133 2 u 7 256 377 29.557 -5.303 1.047 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t 
when poll reach delay offset jitter 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:============================================================================== 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:-kronos.mailus.d 131.188.3.222 2 u 123 128 377 25.110 +3.887 0.238 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:+zeus.f5s.de 192.53.103.104 2 u 18 128 377 25.012 +3.774 0.220 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:+dc8wan.de 237.17.204.95 2 u 15 128 377 25.039 +4.122 0.237 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:-sambuca.psychon 174.222.245.115 2 u 13 128 377 25.009 +3.240 0.227 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:-mail.gunnarhofm 192.53.103.108 2 u 122 128 377 25.045 +3.730 0.246 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:-ns.gunnarhofman 237.17.204.95 2 u 8 128 377 24.967 +3.753 0.249 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:*185.125.190.58 99.220.8.133 2 u 63 128 377 30.813 +4.114 3.247 2026-03-31T20:15:04.626 INFO:teuthology.orchestra.run.vm03.stdout:+141.144.246.224 169.254.169.254 4 u 61 128 377 20.821 +4.115 0.254 2026-03-31T20:15:04.627 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab 2026-03-31T20:15:04.629 INFO:teuthology.task.ansible:Skipping ansible 
cleanup... 2026-03-31T20:15:04.629 DEBUG:teuthology.run_tasks:Unwinding manager selinux 2026-03-31T20:15:04.631 DEBUG:teuthology.run_tasks:Unwinding manager pcp 2026-03-31T20:15:04.633 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer 2026-03-31T20:15:04.635 INFO:teuthology.task.internal:Duration was 4033.993720 seconds 2026-03-31T20:15:04.635 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog 2026-03-31T20:15:04.637 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring... 2026-03-31T20:15:04.637 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-31T20:15:04.638 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-31T20:15:04.639 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-31T20:15:04.640 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart 2026-03-31T20:15:04.766 INFO:teuthology.task.internal.syslog:Checking logs for errors... 
2026-03-31T20:15:04.766 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm01.local 2026-03-31T20:15:04.766 DEBUG:teuthology.orchestra.run.vm01:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-31T20:15:04.778 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local 2026-03-31T20:15:04.778 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root 
filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-31T20:15:04.787 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm05.local 2026-03-31T20:15:04.787 DEBUG:teuthology.orchestra.run.vm05:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-31T20:15:04.795 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local 2026-03-31T20:15:04.795 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 
'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1 2026-03-31T20:15:04.802 INFO:teuthology.task.internal.syslog:Gathering journalctl... 2026-03-31T20:15:04.802 DEBUG:teuthology.orchestra.run.vm01:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.825 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.830 DEBUG:teuthology.orchestra.run.vm05:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.836 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.873 INFO:teuthology.task.internal.syslog:Compressing syslogs... 
2026-03-31T20:15:04.874 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:15:04.886 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:15:04.892 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-31T20:15:04.892 INFO:teuthology.orchestra.run.vm01.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-31T20:15:04.892 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.892 INFO:teuthology.orchestra.run.vm01.stderr: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz 2026-03-31T20:15:04.892 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz 2026-03-31T20:15:04.895 INFO:teuthology.orchestra.run.vm01.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 85.3% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz 2026-03-31T20:15:04.902 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:15:04.904 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose -- 2026-03-31T20:15:04.908 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 
2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/kern.log.gz: No space left on device 2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 2026-03-31T20:15:04.909 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/misc.log.gz: No space left on device 2026-03-31T20:15:04.910 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 2026-03-31T20:15:04.910 INFO:teuthology.orchestra.run.vm03.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz: No space left on device 2026-03-31T20:15:04.910 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/kern.log.gz: No space left on device 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 2026-03-31T20:15:04.911 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/misc.log.gz: No space left on device 
2026-03-31T20:15:04.912 INFO:teuthology.orchestra.run.vm05.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 2026-03-31T20:15:04.912 INFO:teuthology.orchestra.run.vm05.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz: No space left on device 2026-03-31T20:15:04.915 DEBUG:teuthology.orchestra.run:got remote process result: 123 2026-03-31T20:15:04.915 ERROR:teuthology.run_tasks:Manager failed: internal.syslog Traceback (most recent call last): File "/home/teuthos/kshtsk/teuthology/teuthology/task/internal/syslog.py", line 76, in syslog yield File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 1996, in task with contextutil.nested(*subtasks): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 54, in nested raise exc[1] File "/home/teuthos/kshtsk/teuthology/teuthology/contextutil.py", line 46, in nested if exit(*exc): File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__ next(self.gen) File "/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks/ceph.py", line 263, in ceph_log run.wait( File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( 
teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --" During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/teuthos/kshtsk/teuthology/teuthology/run_tasks.py", line 160, in run_tasks suppress = manager.__exit__(*exc_info) File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 153, in __exit__ self.gen.throw(typ, value, traceback) File "/home/teuthos/kshtsk/teuthology/teuthology/task/internal/syslog.py", line 175, in syslog run.wait( File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 485, in wait proc.wait() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 161, in wait self._raise_for_status() File "/home/teuthos/kshtsk/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status raise CommandFailedError( teuthology.exceptions.CommandFailedError: Command failed on vm03 with status 123: "find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --" 2026-03-31T20:15:04.915 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo 2026-03-31T20:15:04.917 INFO:teuthology.task.internal:Restoring /etc/sudoers... 
2026-03-31T20:15:04.917 DEBUG:teuthology.orchestra.run.vm01:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-31T20:15:04.921 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: gzip 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/kern.log.gz: No space left on device 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr: -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 2026-03-31T20:15:04.922 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/misc.log.gz: No space left on device 2026-03-31T20:15:04.923 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 2026-03-31T20:15:04.923 INFO:teuthology.orchestra.run.vm06.stderr:gzip: /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz: No space left on device 2026-03-31T20:15:04.944 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-31T20:15:04.962 DEBUG:teuthology.orchestra.run.vm05:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-31T20:15:04.971 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers 2026-03-31T20:15:04.979 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 
2026-03-31T20:15:04.980 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump 2026-03-31T20:15:04.983 DEBUG:teuthology.orchestra.run.vm01:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:04.984 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:04.989 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = core 2026-03-31T20:15:05.006 DEBUG:teuthology.orchestra.run.vm05:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.014 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core 2026-03-31T20:15:05.014 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-31T20:15:05.016 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.023 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = core 2026-03-31T20:15:05.029 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core 2026-03-31T20:15:05.038 DEBUG:teuthology.orchestra.run.vm01:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.041 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T20:15:05.041 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.068 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T20:15:05.068 DEBUG:teuthology.orchestra.run.vm05:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.078 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T20:15:05.078 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump 2026-03-31T20:15:05.084 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-31T20:15:05.084 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive 2026-03-31T20:15:05.086 INFO:teuthology.task.internal:Transferring archived files... 2026-03-31T20:15:05.087 DEBUG:teuthology.misc:Transferring archived files from vm01:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm01 2026-03-31T20:15:05.087 DEBUG:teuthology.orchestra.run.vm01:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-31T20:15:05.089 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 123817 files and directories currently installed.) 
2026-03-31T20:15:05.091 INFO:teuthology.orchestra.run.vm01.stdout:Purging configuration files for radosgw (20.2.0-721-g5bb32787-1jammy) ... 2026-03-31T20:15:05.095 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm03 2026-03-31T20:15:05.095 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-31T20:15:05.117 DEBUG:teuthology.misc:Transferring archived files from vm05:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm05 2026-03-31T20:15:05.117 DEBUG:teuthology.orchestra.run.vm05:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-31T20:15:05.126 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-31_11:18:10-rados-tentacle-none-default-vps/4340/remote/vm06 2026-03-31T20:15:05.126 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- . 2026-03-31T20:15:05.133 INFO:teuthology.task.internal:Removing archive directory... 
2026-03-31T20:15:05.133 DEBUG:teuthology.orchestra.run.vm01:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-31T20:15:05.138 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-31T20:15:05.158 DEBUG:teuthology.orchestra.run.vm05:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-31T20:15:05.168 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive 2026-03-31T20:15:05.176 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload 2026-03-31T20:15:05.179 INFO:teuthology.task.internal:Not uploading archives. 2026-03-31T20:15:05.179 DEBUG:teuthology.run_tasks:Unwinding manager internal.base 2026-03-31T20:15:05.181 INFO:teuthology.task.internal:Tidying up after the test... 2026-03-31T20:15:05.181 DEBUG:teuthology.orchestra.run.vm01:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-31T20:15:05.182 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-31T20:15:05.184 INFO:teuthology.orchestra.run.vm01.stdout: 258075 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 31 20:15 /home/ubuntu/cephtest 2026-03-31T20:15:05.202 DEBUG:teuthology.orchestra.run.vm05:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-31T20:15:05.205 INFO:teuthology.orchestra.run.vm03.stdout: 258068 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 31 20:15 /home/ubuntu/cephtest 2026-03-31T20:15:05.212 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest 2026-03-31T20:15:05.214 INFO:teuthology.orchestra.run.vm05.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 31 20:15 /home/ubuntu/cephtest 2026-03-31T20:15:05.220 INFO:teuthology.orchestra.run.vm06.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 31 20:15 /home/ubuntu/cephtest 2026-03-31T20:15:05.221 DEBUG:teuthology.run_tasks:Unwinding manager console_log 2026-03-31T20:15:05.224 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: 
CommandFailedError: Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --" 2026-03-31T20:15:05.224 INFO:teuthology.run:Summary data: description: rados/singleton/{all/ec-esb-fio mon_election/classic msgr-failures/few msgr/async-v1only objectstore/{bluestore/{alloc$/{avl} base mem$/{normal-1} onode-segment$/{512K} write$/{random/{compr$/{no$/{no}} random}}}} rados supported-random-distro$/{ubuntu_latest}} duration: 4033.993719816208 failure_reason: 'Command failed on vm03 with status 123: "time sudo find /var/log/ceph -name ''*.log'' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --"' flavor: default owner: kyr status: fail success: false 2026-03-31T20:15:05.225 DEBUG:teuthology.report:Pushing job info to http://localhost:8080 2026-03-31T20:15:05.246 INFO:teuthology.run:FAIL
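Editor's note on the exit status in the failure above: status 123 is GNU xargs semantics, not a gzip-specific code. When any invocation of the command xargs runs exits with a status in the range 1-125 (as gzip does here when the `.gz` output hits ENOSPC), xargs itself exits 123. A minimal sketch, using `false` as a stand-in for the failing gzip invocation (dummy input, not from this run):

```shell
# xargs exits 123 if any invocation of its command exits with status 1-125.
# `false` (exit 1) stands in for gzip failing on a full disk (ENOSPC).
printf 'dummy\0' | xargs -0 --max-args=1 --no-run-if-empty -- false
echo "xargs exit status: $?"   # prints: xargs exit status: 123
```

This is why both the `/var/log/ceph` compression and the syslog compression report the same status 123 even though the underlying error is gzip's "No space left on device".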