2026-03-21T14:40:14.725 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-21T14:40:14.730 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-21T14:40:14.766 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3489
branch: tentacle
description: rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2} 3-supported-random-distro$/{centos_latest} 4-cache-path 5-cache-mode/rwl 6-cache-size/1G 7-workloads/qemu_xfstests conf/{disable-pool-app}}
email: null
first_in_suite: false
flavor: default
job_id: '3489'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps
no_nested_subset: false
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        rbd_persistent_cache_mode: rwl
        rbd_persistent_cache_path: /home/ubuntu/cephtest/rbd-pwl-cache
        rbd_persistent_cache_size: 1073741824
        rbd_plugins: pwl_cache
      global:
        mon warn on pool no app: false
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    sha1: 70f8415b300f041766fa27faf7d5472699e32388
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 70f8415b300f041766fa27faf7d5472699e32388
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  workunit:
    branch: tt-tentacle
    sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mgr.x
  - osd.0
  - osd.1
- - mon.b
  - mgr.y
  - osd.2
  - osd.3
  - client.0
seed: 3051
sha1: 70f8415b300f041766fa27faf7d5472699e32388
sleep_before_teardown: 0
subset: 1/128
suite: rbd
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIpSOAdY3T/6haAG7o4rDiTr6BfJep0HvSksZFOuR7MI7ZX0rp3SzA5gfwanXw34+aFwPB6p6/tRK3WSG1ovFI=
  vm05.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEjUrV+jd2i3MWkce3otNYg7MpL/Pjsf6jQNdtK3cafD2PjuVy4AubknZDhcbgsCrw92RlW1qrWhKP65TZno3LE=
tasks:
- install: null
- ceph: null
- exec:
    client.0:
    - mkdir /home/ubuntu/cephtest/tmpfs
    - mkdir /home/ubuntu/cephtest/rbd-pwl-cache
    - sudo mount -t tmpfs -o size=20G tmpfs /home/ubuntu/cephtest/tmpfs
    - truncate -s 20G /home/ubuntu/cephtest/tmpfs/loopfile
    - mkfs.ext4 /home/ubuntu/cephtest/tmpfs/loopfile
    - sudo mount -o loop /home/ubuntu/cephtest/tmpfs/loopfile /home/ubuntu/cephtest/rbd-pwl-cache
    - sudo chmod 777 /home/ubuntu/cephtest/rbd-pwl-cache
- exec_on_cleanup:
    client.0:
    - sudo umount /home/ubuntu/cephtest/rbd-pwl-cache
    - sudo umount /home/ubuntu/cephtest/tmpfs
    - rm -rf /home/ubuntu/cephtest/rbd-pwl-cache
    - rm -rf /home/ubuntu/cephtest/tmpfs
- qemu:
    client.0:
      cpus: 4
      disks: 3
      memory: 4096
      test: qa/run_xfstests_qemu.sh
      type: block
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-20_22:04:26
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345
2026-03-21T14:40:14.766 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa; will attempt to use it
2026-03-21T14:40:14.767 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks
2026-03-21T14:40:14.767 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-21T14:40:14.768 INFO:teuthology.task.internal:Checking packages...
2026-03-21T14:40:14.768 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash '70f8415b300f041766fa27faf7d5472699e32388'
2026-03-21T14:40:14.768 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-21T14:40:14.768 INFO:teuthology.packaging:ref: None
2026-03-21T14:40:14.768 INFO:teuthology.packaging:tag: None
2026-03-21T14:40:14.768 INFO:teuthology.packaging:branch: tentacle
2026-03-21T14:40:14.768 INFO:teuthology.packaging:sha1: 70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T14:40:14.768 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=tentacle
2026-03-21T14:40:15.585 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-721.g5bb32787
2026-03-21T14:40:15.586 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-21T14:40:15.587 INFO:teuthology.task.internal:no buildpackages task found
2026-03-21T14:40:15.587 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-21T14:40:15.589 INFO:teuthology.task.internal:Saving configuration
2026-03-21T14:40:15.593 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-21T14:40:15.596 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-21T14:40:15.603 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3489', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-21 14:38:48.729546', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIpSOAdY3T/6haAG7o4rDiTr6BfJep0HvSksZFOuR7MI7ZX0rp3SzA5gfwanXw34+aFwPB6p6/tRK3WSG1ovFI='}
2026-03-21T14:40:15.608 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm05.local', 'description': '/archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3489', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-21 14:38:48.730699', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:05', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEjUrV+jd2i3MWkce3otNYg7MpL/Pjsf6jQNdtK3cafD2PjuVy4AubknZDhcbgsCrw92RlW1qrWhKP65TZno3LE='}
2026-03-21T14:40:15.608 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-21T14:40:15.610 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['mon.a', 'mgr.x', 'osd.0', 'osd.1']
2026-03-21T14:40:15.610 INFO:teuthology.task.internal:roles: ubuntu@vm05.local - ['mon.b', 'mgr.y', 'osd.2', 'osd.3', 'client.0']
2026-03-21T14:40:15.610 INFO:teuthology.run_tasks:Running task console_log...
2026-03-21T14:40:15.624 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-21T14:40:15.631 DEBUG:teuthology.task.console_log:vm05 does not support IPMI; excluding
2026-03-21T14:40:15.631 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f2a38000790>, signals=[15])
2026-03-21T14:40:15.631 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-21T14:40:15.632 INFO:teuthology.task.internal:Opening connections...
2026-03-21T14:40:15.632 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-21T14:40:15.633 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T14:40:15.693 DEBUG:teuthology.task.internal:connecting to ubuntu@vm05.local
2026-03-21T14:40:15.693 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T14:40:15.753 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-21T14:40:15.755 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-21T14:40:15.799 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-21T14:40:15.799 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-21T14:40:15.859 INFO:teuthology.orchestra.run.vm01.stdout:NAME="CentOS Stream"
2026-03-21T14:40:15.859 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="9"
2026-03-21T14:40:15.859 INFO:teuthology.orchestra.run.vm01.stdout:ID="centos"
2026-03-21T14:40:15.859 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE="rhel fedora"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="9"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:PLATFORM_ID="platform:el9"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:ANSI_COLOR="0;31"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:LOGO="fedora-logo-icon"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://centos.org/"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-21T14:40:15.860 INFO:teuthology.orchestra.run.vm01.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-21T14:40:15.860 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-21T14:40:15.867 DEBUG:teuthology.orchestra.run.vm05:> uname -m
2026-03-21T14:40:15.886 INFO:teuthology.orchestra.run.vm05.stdout:x86_64
2026-03-21T14:40:15.886 DEBUG:teuthology.orchestra.run.vm05:> cat /etc/os-release
2026-03-21T14:40:15.943 INFO:teuthology.orchestra.run.vm05.stdout:NAME="CentOS Stream"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:VERSION="9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:ID="centos"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:ID_LIKE="rhel fedora"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:VERSION_ID="9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:PLATFORM_ID="platform:el9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:ANSI_COLOR="0;31"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:LOGO="fedora-logo-icon"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:HOME_URL="https://centos.org/"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-21T14:40:15.944 INFO:teuthology.orchestra.run.vm05.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-21T14:40:15.944 INFO:teuthology.lock.ops:Updating vm05.local on lock server
2026-03-21T14:40:15.953 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-21T14:40:15.958 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-21T14:40:15.959 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-21T14:40:15.959 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-21T14:40:15.962 DEBUG:teuthology.orchestra.run.vm05:> test '!' -e /home/ubuntu/cephtest
2026-03-21T14:40:16.000 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-21T14:40:16.001 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-21T14:40:16.002 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-21T14:40:16.020 DEBUG:teuthology.orchestra.run.vm05:> test -z $(ls -A /var/lib/ceph)
2026-03-21T14:40:16.037 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-21T14:40:16.056 INFO:teuthology.orchestra.run.vm05.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-21T14:40:16.056 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-21T14:40:16.066 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-21T14:40:16.095 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T14:40:16.306 DEBUG:teuthology.orchestra.run.vm05:> test -e /ceph-qa-ready
2026-03-21T14:40:16.322 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T14:40:16.520 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-21T14:40:16.522 INFO:teuthology.task.internal:Creating test directory...
2026-03-21T14:40:16.522 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-21T14:40:16.525 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-21T14:40:16.542 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-21T14:40:16.544 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-21T14:40:16.546 INFO:teuthology.task.internal:Creating archive directory...
2026-03-21T14:40:16.546 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-21T14:40:16.583 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-21T14:40:16.603 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-21T14:40:16.604 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-21T14:40:16.605 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-21T14:40:16.655 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T14:40:16.655 DEBUG:teuthology.orchestra.run.vm05:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-21T14:40:16.673 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T14:40:16.673 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-21T14:40:16.698 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-21T14:40:16.728 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T14:40:16.736 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T14:40:16.742 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T14:40:16.754 INFO:teuthology.orchestra.run.vm05.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T14:40:16.756 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-21T14:40:16.759 INFO:teuthology.task.internal:Configuring sudo...
2026-03-21T14:40:16.759 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-21T14:40:16.780 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-21T14:40:16.823 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-21T14:40:16.826 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-21T14:40:16.826 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-21T14:40:16.846 DEBUG:teuthology.orchestra.run.vm05:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-21T14:40:16.882 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-21T14:40:16.925 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-21T14:40:16.986 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:40:16.986 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-21T14:40:17.049 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-21T14:40:17.077 DEBUG:teuthology.orchestra.run.vm05:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-21T14:40:17.134 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:40:17.134 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-21T14:40:17.194 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-21T14:40:17.196 DEBUG:teuthology.orchestra.run.vm05:> sudo service rsyslog restart
2026-03-21T14:40:17.222 INFO:teuthology.orchestra.run.vm01.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-21T14:40:17.269 INFO:teuthology.orchestra.run.vm05.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-21T14:40:17.653 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-21T14:40:17.655 INFO:teuthology.task.internal:Starting timer...
2026-03-21T14:40:17.655 INFO:teuthology.run_tasks:Running task pcp...
2026-03-21T14:40:17.659 INFO:teuthology.run_tasks:Running task selinux...
2026-03-21T14:40:17.662 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-21T14:40:17.662 INFO:teuthology.task.selinux:Excluding vm05: VMs are not yet supported
2026-03-21T14:40:17.662 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-21T14:40:17.662 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-21T14:40:17.662 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-21T14:40:17.662 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-21T14:40:17.664 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-21T14:40:17.664 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-21T14:40:17.666 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-21T14:40:18.354 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-21T14:40:18.361 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-21T14:40:18.362 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventoryxxw1fm99 --limit vm01.local,vm05.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-21T14:41:54.590 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local'), Remote(name='ubuntu@vm05.local')]
2026-03-21T14:41:54.590 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-21T14:41:54.590 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T14:41:54.654 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-21T14:41:54.725 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-21T14:41:54.726 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm05.local'
2026-03-21T14:41:54.726 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm05.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T14:41:54.789 DEBUG:teuthology.orchestra.run.vm05:> true
2026-03-21T14:41:54.867 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm05.local'
2026-03-21T14:41:54.867 INFO:teuthology.run_tasks:Running task clock...
2026-03-21T14:41:54.870 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-21T14:41:54.870 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-21T14:41:54.870 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-21T14:41:54.871 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-21T14:41:54.872 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-21T14:41:54.906 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-21T14:41:54.920 INFO:teuthology.orchestra.run.vm01.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-21T14:41:54.937 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-21T14:41:54.945 INFO:teuthology.orchestra.run.vm01.stderr:sudo: ntpd: command not found
2026-03-21T14:41:54.951 INFO:teuthology.orchestra.run.vm05.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-21T14:41:54.957 INFO:teuthology.orchestra.run.vm01.stdout:506 Cannot talk to daemon
2026-03-21T14:41:54.972 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-21T14:41:54.982 INFO:teuthology.orchestra.run.vm05.stderr:sudo: ntpd: command not found
2026-03-21T14:41:54.986 INFO:teuthology.orchestra.run.vm01.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-21T14:41:54.994 INFO:teuthology.orchestra.run.vm05.stdout:506 Cannot talk to daemon
2026-03-21T14:41:55.009 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-21T14:41:55.022 INFO:teuthology.orchestra.run.vm05.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-21T14:41:55.038 INFO:teuthology.orchestra.run.vm01.stderr:bash: line 1: ntpq: command not found
2026-03-21T14:41:55.040 INFO:teuthology.orchestra.run.vm01.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-21T14:41:55.040 INFO:teuthology.orchestra.run.vm01.stdout:===============================================================================
2026-03-21T14:41:55.074 INFO:teuthology.orchestra.run.vm05.stderr:bash: line 1: ntpq: command not found
2026-03-21T14:41:55.077 INFO:teuthology.orchestra.run.vm05.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-21T14:41:55.077 INFO:teuthology.orchestra.run.vm05.stdout:===============================================================================
2026-03-21T14:41:55.077 INFO:teuthology.run_tasks:Running task install...
2026-03-21T14:41:55.079 DEBUG:teuthology.task.install:project ceph
2026-03-21T14:41:55.079 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-21T14:41:55.079 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-21T14:41:55.079 INFO:teuthology.task.install:Using flavor: default
2026-03-21T14:41:55.082 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-21T14:41:55.082 INFO:teuthology.task.install:extra packages: []
2026-03-21T14:41:55.082 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-21T14:41:55.082 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T14:41:55.083 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-21T14:41:55.083 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T14:41:55.748 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-21T14:41:55.748 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-21T14:41:55.783 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-21T14:41:55.783 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-21T14:41:56.258 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-21T14:41:56.258 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:41:56.258 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-21T14:41:56.288 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-21T14:41:56.288 DEBUG:teuthology.orchestra.run.vm05:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-21T14:41:56.297 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-21T14:41:56.297 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:41:56.297 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-21T14:41:56.326 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-21T14:41:56.327 DEBUG:teuthology.orchestra.run.vm01:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-21T14:41:56.361 DEBUG:teuthology.orchestra.run.vm05:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-21T14:41:56.395 DEBUG:teuthology.orchestra.run.vm01:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-21T14:41:56.449 DEBUG:teuthology.orchestra.run.vm05:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-21T14:41:56.480 DEBUG:teuthology.orchestra.run.vm01:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-21T14:41:56.480 INFO:teuthology.orchestra.run.vm05.stdout:check_obsoletes = 1
2026-03-21T14:41:56.481 DEBUG:teuthology.orchestra.run.vm05:> sudo yum clean all
2026-03-21T14:41:56.548 INFO:teuthology.orchestra.run.vm01.stdout:check_obsoletes = 1
2026-03-21T14:41:56.550 DEBUG:teuthology.orchestra.run.vm01:> sudo yum clean all
2026-03-21T14:41:56.656 INFO:teuthology.orchestra.run.vm05.stdout:41 files removed
2026-03-21T14:41:56.677 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-21T14:41:56.732 INFO:teuthology.orchestra.run.vm01.stdout:41 files removed
2026-03-21T14:41:56.751 DEBUG:teuthology.orchestra.run.vm01:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-21T14:41:58.031 INFO:teuthology.orchestra.run.vm01.stdout:ceph packages for x86_64 79 kB/s | 87 kB 00:01
2026-03-21T14:41:58.070 INFO:teuthology.orchestra.run.vm05.stdout:ceph packages for x86_64 72 kB/s | 87 kB 00:01
2026-03-21T14:41:59.107 INFO:teuthology.orchestra.run.vm01.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-21T14:41:59.160 INFO:teuthology.orchestra.run.vm05.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-21T14:42:00.086 INFO:teuthology.orchestra.run.vm01.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-21T14:42:00.139 INFO:teuthology.orchestra.run.vm05.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-21T14:42:01.064 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - BaseOS 9.8 MB/s | 8.9 MB 00:00
2026-03-21T14:42:01.845 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - BaseOS 5.1 MB/s | 8.9 MB 00:01
2026-03-21T14:42:02.479 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - AppStream 37 MB/s | 27 MB 00:00
2026-03-21T14:42:04.827 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - AppStream 12 MB/s | 27 MB 00:02
2026-03-21T14:42:09.009 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - CRB 5.7 MB/s | 8.0 MB 00:01
2026-03-21T14:42:10.332 INFO:teuthology.orchestra.run.vm01.stdout:CentOS Stream 9 - Extras packages 43 kB/s | 20 kB 00:00
2026-03-21T14:42:11.130 INFO:teuthology.orchestra.run.vm01.stdout:Extra Packages for Enterprise Linux 29 MB/s | 20 MB 00:00
2026-03-21T14:42:13.254 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - CRB 1.0 MB/s | 8.0 MB 00:08
2026-03-21T14:42:14.477 INFO:teuthology.orchestra.run.vm05.stdout:CentOS Stream 9 - Extras packages 57 kB/s | 20 kB 00:00
2026-03-21T14:42:15.345 INFO:teuthology.orchestra.run.vm05.stdout:Extra Packages for Enterprise Linux 26 MB/s | 20 MB 00:00
2026-03-21T14:42:15.922 INFO:teuthology.orchestra.run.vm01.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-21T14:42:17.345 INFO:teuthology.orchestra.run.vm01.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-21T14:42:17.346 INFO:teuthology.orchestra.run.vm01.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-21T14:42:17.379 INFO:teuthology.orchestra.run.vm01.stdout:Dependencies resolved.
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: Package Arch Version Repository Size
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout:Installing:
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-21T14:42:17.383 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout:Upgrading:
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout:Installing dependencies:
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-21T14:42:17.384 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-21T14:42:17.385 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k
2026-03-21T14:42:17.386 INFO:teuthology.orchestra.run.vm01.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout: unzip x86_64 6.0-59.el9 baseos 182 k
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout: zip x86_64 3.0-35.el9 baseos 266 k
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Installing weak dependencies:
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Transaction Summary
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:======================================================================================
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Install 136 Packages
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Upgrade 2 Packages
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Total download size: 267 M
2026-03-21T14:42:17.387 INFO:teuthology.orchestra.run.vm01.stdout:Downloading Packages:
2026-03-21T14:42:18.748 INFO:teuthology.orchestra.run.vm01.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 14 kB/s | 6.5 kB 00:00
2026-03-21T14:42:19.587 INFO:teuthology.orchestra.run.vm01.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.1 MB/s | 939 kB 00:00
2026-03-21T14:42:19.707 INFO:teuthology.orchestra.run.vm01.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00
2026-03-21T14:42:19.801 INFO:teuthology.orchestra.run.vm01.stdout:(4/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.8 MB/s | 5.9 MB 00:01
2026-03-21T14:42:19.924 INFO:teuthology.orchestra.run.vm05.stdout:lab-extras 64 kB/s | 50 kB 00:00
2026-03-21T14:42:19.947 INFO:teuthology.orchestra.run.vm01.stdout:(5/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 6.4 MB/s | 962 kB 00:00
2026-03-21T14:42:19.990 INFO:teuthology.orchestra.run.vm01.stdout:(6/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 8.3 MB/s | 2.3 MB 00:00
2026-03-21T14:42:20.331 INFO:teuthology.orchestra.run.vm01.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 13 MB/s | 5.0 MB 00:00
2026-03-21T14:42:20.947 INFO:teuthology.orchestra.run.vm01.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 8.9 MB/s | 24 MB 00:02
2026-03-21T14:42:21.064 INFO:teuthology.orchestra.run.vm01.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 214 kB/s | 25 kB 00:00
2026-03-21T14:42:21.145 INFO:teuthology.orchestra.run.vm01.stdout:(10/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86 15 MB/s | 17 MB 00:01
2026-03-21T14:42:21.264 INFO:teuthology.orchestra.run.vm01.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 290 kB/s | 34 kB 00:00
2026-03-21T14:42:21.373 INFO:teuthology.orchestra.run.vm05.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-21T14:42:21.374 INFO:teuthology.orchestra.run.vm05.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-21T14:42:21.383 INFO:teuthology.orchestra.run.vm01.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 204 kB/s | 24 kB 00:00
2026-03-21T14:42:21.409 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved.
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout:======================================================================================
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: Package Arch Version Repository Size
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout:======================================================================================
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout:Installing:
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 126 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k
2026-03-21T14:42:21.413 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout:Upgrading:
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout:Installing dependencies:
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k
2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: protobuf x86_64 3.14.0-17.el9 appstream
1.0 M 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-21T14:42:21.414 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio x86_64 
1.46.7-10.el9 epel 2.0 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: 
python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3 noarch 
1.26.5-7.el9 baseos 218 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout:Installing weak dependencies: 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout:Transaction Summary 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout:====================================================================================== 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout:Install 136 Packages 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout:Upgrade 2 Packages 2026-03-21T14:42:21.415 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:21.416 INFO:teuthology.orchestra.run.vm05.stdout:Total download size: 267 M 
2026-03-21T14:42:21.416 INFO:teuthology.orchestra.run.vm05.stdout:Downloading Packages:
2026-03-21T14:42:21.513 INFO:teuthology.orchestra.run.vm01.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 6.5 MB/s | 866 kB 00:00
2026-03-21T14:42:21.635 INFO:teuthology.orchestra.run.vm01.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.3 MB/s | 164 kB 00:00
2026-03-21T14:42:21.757 INFO:teuthology.orchestra.run.vm01.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.0 MB/s | 126 kB 00:00
2026-03-21T14:42:21.879 INFO:teuthology.orchestra.run.vm01.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 2.0 MB/s | 250 kB 00:00
2026-03-21T14:42:22.381 INFO:teuthology.orchestra.run.vm01.stdout:(17/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 13 MB/s | 6.4 MB 00:00
2026-03-21T14:42:22.503 INFO:teuthology.orchestra.run.vm01.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 372 kB/s | 45 kB 00:00
2026-03-21T14:42:22.624 INFO:teuthology.orchestra.run.vm01.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.4 MB/s | 175 kB 00:00
2026-03-21T14:42:22.739 INFO:teuthology.orchestra.run.vm01.stdout:(20/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 9.8 MB/s | 24 MB 00:02
2026-03-21T14:42:22.746 INFO:teuthology.orchestra.run.vm01.stdout:(21/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.3 MB/s | 163 kB 00:00
2026-03-21T14:42:22.870 INFO:teuthology.orchestra.run.vm01.stdout:(22/138): python3-rados-20.2.0-712.g70f8415b.el 2.4 MB/s | 324 kB 00:00
2026-03-21T14:42:22.885 INFO:teuthology.orchestra.run.vm01.stdout:(23/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.1 MB/s | 304 kB 00:00
2026-03-21T14:42:22.989 INFO:teuthology.orchestra.run.vm01.stdout:(24/138): python3-rgw-20.2.0-712.g70f8415b.el9. 832 kB/s | 99 kB 00:00
2026-03-21T14:42:23.005 INFO:teuthology.orchestra.run.vm01.stdout:(25/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 762 kB/s | 91 kB 00:00
2026-03-21T14:42:23.125 INFO:teuthology.orchestra.run.vm05.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 13 kB/s | 6.5 kB 00:00
2026-03-21T14:42:23.128 INFO:teuthology.orchestra.run.vm01.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.4 MB/s | 180 kB 00:00
2026-03-21T14:42:23.247 INFO:teuthology.orchestra.run.vm01.stdout:(27/138): ceph-grafana-dashboards-20.2.0-712.g7 366 kB/s | 43 kB 00:00
2026-03-21T14:42:23.354 INFO:teuthology.orchestra.run.vm01.stdout:(28/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 8.0 MB/s | 2.9 MB 00:00
2026-03-21T14:42:23.366 INFO:teuthology.orchestra.run.vm01.stdout:(29/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.4 MB/s | 173 kB 00:00
2026-03-21T14:42:23.887 INFO:teuthology.orchestra.run.vm01.stdout:(30/138): ceph-mgr-diskprediction-local-20.2.0- 14 MB/s | 7.4 MB 00:00
2026-03-21T14:42:23.931 INFO:teuthology.orchestra.run.vm05.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.1 MB/s | 939 kB 00:00
2026-03-21T14:42:24.009 INFO:teuthology.orchestra.run.vm01.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 2.3 MB/s | 290 kB 00:00
2026-03-21T14:42:24.049 INFO:teuthology.orchestra.run.vm05.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.3 MB/s | 154 kB 00:00
2026-03-21T14:42:24.128 INFO:teuthology.orchestra.run.vm01.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 424 kB/s | 50 kB 00:00
2026-03-21T14:42:24.247 INFO:teuthology.orchestra.run.vm05.stdout:(4/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 3.7 MB/s | 5.9 MB 00:01
2026-03-21T14:42:24.255 INFO:teuthology.orchestra.run.vm01.stdout:(33/138): ceph-prometheus-alerts-20.2.0-712.g70 136 kB/s | 17 kB 00:00
2026-03-21T14:42:24.392 INFO:teuthology.orchestra.run.vm05.stdout:(5/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 6.5 MB/s | 962 kB 00:00
2026-03-21T14:42:24.401 INFO:teuthology.orchestra.run.vm01.stdout:(34/138): ceph-volume-20.2.0-712.g70f8415b.el9. 2.0 MB/s | 298 kB 00:00
2026-03-21T14:42:24.431 INFO:teuthology.orchestra.run.vm05.stdout:(6/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 6.1 MB/s | 2.3 MB 00:00
2026-03-21T14:42:24.465 INFO:teuthology.orchestra.run.vm01.stdout:(35/138): ceph-mgr-dashboard-20.2.0-712.g70f841 9.5 MB/s | 11 MB 00:01
2026-03-21T14:42:24.562 INFO:teuthology.orchestra.run.vm01.stdout:(36/138): cephadm-20.2.0-712.g70f8415b.el9.noar 6.2 MB/s | 1.0 MB 00:00
2026-03-21T14:42:24.636 INFO:teuthology.orchestra.run.vm01.stdout:(37/138): bzip2-1.0.8-11.el9.x86_64.rpm 322 kB/s | 55 kB 00:00
2026-03-21T14:42:24.689 INFO:teuthology.orchestra.run.vm01.stdout:(38/138): fuse-2.9.9-17.el9.x86_64.rpm 1.5 MB/s | 80 kB 00:00
2026-03-21T14:42:24.741 INFO:teuthology.orchestra.run.vm01.stdout:(39/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 786 kB/s | 40 kB 00:00
2026-03-21T14:42:24.759 INFO:teuthology.orchestra.run.vm01.stdout:(40/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.7 MB/s | 351 kB 00:00
2026-03-21T14:42:24.797 INFO:teuthology.orchestra.run.vm01.stdout:(41/138): libconfig-1.7.2-9.el9.x86_64.rpm 1.3 MB/s | 72 kB 00:00
2026-03-21T14:42:24.823 INFO:teuthology.orchestra.run.vm05.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 12 MB/s | 5.0 MB 00:00
2026-03-21T14:42:24.860 INFO:teuthology.orchestra.run.vm01.stdout:(42/138): libgfortran-11.5.0-14.el9.x86_64.rpm 7.7 MB/s | 794 kB 00:00
2026-03-21T14:42:24.864 INFO:teuthology.orchestra.run.vm01.stdout:(43/138): libquadmath-11.5.0-14.el9.x86_64.rpm 2.7 MB/s | 184 kB 00:00
2026-03-21T14:42:24.909 INFO:teuthology.orchestra.run.vm01.stdout:(44/138): mailcap-2.1.49-5.el9.noarch.rpm 689 kB/s | 33 kB 00:00
2026-03-21T14:42:24.914 INFO:teuthology.orchestra.run.vm01.stdout:(45/138): pciutils-3.7.0-7.el9.x86_64.rpm 1.8 MB/s | 93 kB 00:00
2026-03-21T14:42:24.964 INFO:teuthology.orchestra.run.vm01.stdout:(46/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 4.6 MB/s | 253 kB 00:00
2026-03-21T14:42:24.988 INFO:teuthology.orchestra.run.vm01.stdout:(47/138): python3-cryptography-36.0.1-5.el9.x86 17 MB/s | 1.2 MB 00:00
2026-03-21T14:42:25.014 INFO:teuthology.orchestra.run.vm01.stdout:(48/138): python3-ply-3.11-14.el9.noarch.rpm 2.1 MB/s | 106 kB 00:00
2026-03-21T14:42:25.040 INFO:teuthology.orchestra.run.vm01.stdout:(49/138): python3-pycparser-2.20-6.el9.noarch.r 2.5 MB/s | 135 kB 00:00
2026-03-21T14:42:25.064 INFO:teuthology.orchestra.run.vm01.stdout:(50/138): python3-pyparsing-2.4.7-9.el9.noarch. 2.9 MB/s | 150 kB 00:00
2026-03-21T14:42:25.091 INFO:teuthology.orchestra.run.vm01.stdout:(51/138): python3-requests-2.25.1-10.el9.noarch 2.4 MB/s | 126 kB 00:00
2026-03-21T14:42:25.119 INFO:teuthology.orchestra.run.vm01.stdout:(52/138): python3-urllib3-1.26.5-7.el9.noarch.r 3.9 MB/s | 218 kB 00:00
2026-03-21T14:42:25.142 INFO:teuthology.orchestra.run.vm01.stdout:(53/138): unzip-6.0-59.el9.x86_64.rpm 3.5 MB/s | 182 kB 00:00
2026-03-21T14:42:25.171 INFO:teuthology.orchestra.run.vm01.stdout:(54/138): zip-3.0-35.el9.x86_64.rpm 5.0 MB/s | 266 kB 00:00
2026-03-21T14:42:25.250 INFO:teuthology.orchestra.run.vm05.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 9.1 MB/s | 24 MB 00:02
2026-03-21T14:42:25.361 INFO:teuthology.orchestra.run.vm05.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 225 kB/s | 25 kB 00:00
2026-03-21T14:42:25.378 INFO:teuthology.orchestra.run.vm01.stdout:(55/138): flexiblas-3.0.4-9.el9.x86_64.rpm 143 kB/s | 30 kB 00:00
2026-03-21T14:42:25.461 INFO:teuthology.orchestra.run.vm01.stdout:(56/138): boost-program-options-1.75.0-13.el9.x 327 kB/s | 104 kB 00:00
2026-03-21T14:42:25.533 INFO:teuthology.orchestra.run.vm01.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 206 kB/s | 15 kB 00:00
2026-03-21T14:42:25.679 INFO:teuthology.orchestra.run.vm01.stdout:(58/138): libnbd-1.20.3-4.el9.x86_64.rpm 1.1 MB/s | 164 kB 00:00
2026-03-21T14:42:25.762 INFO:teuthology.orchestra.run.vm05.stdout:(10/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86 13 MB/s | 17 MB 00:01
2026-03-21T14:42:25.763 INFO:teuthology.orchestra.run.vm01.stdout:(59/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.9 MB/s | 160 kB 00:00
2026-03-21T14:42:25.841 INFO:teuthology.orchestra.run.vm01.stdout:(60/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 582 kB/s | 45 kB 00:00
2026-03-21T14:42:25.890 INFO:teuthology.orchestra.run.vm05.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 268 kB/s | 34 kB 00:00
2026-03-21T14:42:25.914 INFO:teuthology.orchestra.run.vm01.stdout:(61/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 5.6 MB/s | 3.0 MB 00:00
2026-03-21T14:42:25.977 INFO:teuthology.orchestra.run.vm01.stdout:(62/138): librdkafka-1.6.1-102.el9.x86_64.rpm 4.8 MB/s | 662 kB 00:00
2026-03-21T14:42:25.986 INFO:teuthology.orchestra.run.vm01.stdout:(63/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.3 MB/s | 246 kB 00:00
2026-03-21T14:42:26.005 INFO:teuthology.orchestra.run.vm05.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 211 kB/s | 24 kB 00:00
2026-03-21T14:42:26.068 INFO:teuthology.orchestra.run.vm01.stdout:(64/138): libxslt-1.1.34-12.el9.x86_64.rpm 2.5 MB/s | 233 kB 00:00
2026-03-21T14:42:26.072 INFO:teuthology.orchestra.run.vm01.stdout:(65/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 3.3 MB/s | 292 kB 00:00
2026-03-21T14:42:26.132 INFO:teuthology.orchestra.run.vm05.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 6.7 MB/s | 866 kB 00:00
2026-03-21T14:42:26.142 INFO:teuthology.orchestra.run.vm01.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 2.5 MB/s | 188 kB 00:00
2026-03-21T14:42:26.161 INFO:teuthology.orchestra.run.vm01.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 474 kB/s | 42 kB 00:00
2026-03-21T14:42:26.236 INFO:teuthology.orchestra.run.vm01.stdout:(68/138): perl-Benchmark-1.23-483.el9.noarch.rp 355 kB/s | 26 kB 00:00
2026-03-21T14:42:26.249 INFO:teuthology.orchestra.run.vm05.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.4 MB/s | 164 kB 00:00
2026-03-21T14:42:26.308 INFO:teuthology.orchestra.run.vm01.stdout:(69/138): perl-Test-Harness-3.42-461.el9.noarch 4.0 MB/s | 295 kB 00:00
2026-03-21T14:42:26.366 INFO:teuthology.orchestra.run.vm05.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.1 MB/s | 126 kB 00:00
2026-03-21T14:42:26.522 INFO:teuthology.orchestra.run.vm05.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 1.6 MB/s | 250 kB 00:00
2026-03-21T14:42:26.555 INFO:teuthology.orchestra.run.vm01.stdout:(70/138): ceph-test-20.2.0-712.g70f8415b.el9.x8 15 MB/s | 84 MB 00:05
2026-03-21T14:42:26.561 INFO:teuthology.orchestra.run.vm01.stdout:(71/138): protobuf-3.14.0-17.el9.x86_64.rpm 4.0 MB/s | 1.0 MB 00:00
2026-03-21T14:42:26.582 INFO:teuthology.orchestra.run.vm01.stdout:(72/138): openblas-openmp-0.3.29-1.el9.x86_64.r 12 MB/s | 5.3 MB 00:00
2026-03-21T14:42:26.631 INFO:teuthology.orchestra.run.vm01.stdout:(73/138): python3-devel-3.9.25-3.el9.x86_64.rpm 3.4 MB/s | 244 kB 00:00
2026-03-21T14:42:26.654 INFO:teuthology.orchestra.run.vm01.stdout:(74/138): python3-jinja2-2.11.3-8.el9.noarch.rp 3.4 MB/s | 249 kB 00:00
2026-03-21T14:42:26.748 INFO:teuthology.orchestra.run.vm01.stdout:(75/138): python3-jmespath-1.0.1-1.el9.noarch.r 408 kB/s | 48 kB 00:00
2026-03-21T14:42:26.749 INFO:teuthology.orchestra.run.vm05.stdout:(17/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 12 MB/s | 24 MB 00:01
2026-03-21T14:42:26.749 INFO:teuthology.orchestra.run.vm01.stdout:(76/138): python3-libstoragemgmt-1.10.1-1.el9.x 1.8 MB/s | 177 kB 00:00
2026-03-21T14:42:26.814 INFO:teuthology.orchestra.run.vm01.stdout:(77/138): python3-markupsafe-1.1.1-12.el9.x86_6 524 kB/s | 35 kB 00:00
2026-03-21T14:42:26.881 INFO:teuthology.orchestra.run.vm05.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 341 kB/s | 45 kB 00:00
2026-03-21T14:42:26.888 INFO:teuthology.orchestra.run.vm01.stdout:(78/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.9 MB/s | 442 kB 00:00
2026-03-21T14:42:26.907 INFO:teuthology.orchestra.run.vm05.stdout:(19/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 17 MB/s | 6.4 MB 00:00
2026-03-21T14:42:26.956 INFO:teuthology.orchestra.run.vm01.stdout:(79/138): python3-packaging-20.9-5.el9.noarch.r 1.1 MB/s | 77 kB 00:00
2026-03-21T14:42:27.004 INFO:teuthology.orchestra.run.vm05.stdout:(20/138): python3-ceph-common-20.2.0-712.g70f84 1.4 MB/s | 175 kB 00:00
2026-03-21T14:42:27.029 INFO:teuthology.orchestra.run.vm05.stdout:(21/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.3 MB/s | 163 kB 00:00
2026-03-21T14:42:27.036 INFO:teuthology.orchestra.run.vm01.stdout:(80/138): python3-protobuf-3.14.0-17.el9.noarch 3.3 MB/s | 267 kB 00:00
2026-03-21T14:42:27.105 INFO:teuthology.orchestra.run.vm01.stdout:(81/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.2 MB/s | 157 kB 00:00
2026-03-21T14:42:27.129 INFO:teuthology.orchestra.run.vm05.stdout:(22/138): python3-rados-20.2.0-712.g70f8415b.el 2.5 MB/s | 324 kB 00:00
2026-03-21T14:42:27.148 INFO:teuthology.orchestra.run.vm05.stdout:(23/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.5 MB/s | 304 kB 00:00
2026-03-21T14:42:27.176 INFO:teuthology.orchestra.run.vm01.stdout:(82/138): python3-pyasn1-modules-0.4.8-7.el9.no 3.9 MB/s | 277 kB 00:00
2026-03-21T14:42:27.192 INFO:teuthology.orchestra.run.vm01.stdout:(83/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 14 MB/s | 6.1 MB 00:00
2026-03-21T14:42:27.242 INFO:teuthology.orchestra.run.vm01.stdout:(84/138): python3-requests-oauthlib-1.3.0-12.el 811 kB/s | 54 kB 00:00
2026-03-21T14:42:27.251 INFO:teuthology.orchestra.run.vm05.stdout:(24/138): python3-rgw-20.2.0-712.g70f8415b.el9. 812 kB/s | 99 kB 00:00
2026-03-21T14:42:27.264 INFO:teuthology.orchestra.run.vm05.stdout:(25/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 786 kB/s | 91 kB 00:00
2026-03-21T14:42:27.309 INFO:teuthology.orchestra.run.vm01.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 632 kB/s | 42 kB 00:00
2026-03-21T14:42:27.397 INFO:teuthology.orchestra.run.vm01.stdout:(86/138): python3-babel-2.9.1-2.el9.noarch.rpm 7.1 MB/s | 6.0 MB 00:00
2026-03-21T14:42:27.398 INFO:teuthology.orchestra.run.vm05.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.3 MB/s | 180 kB 00:00
2026-03-21T14:42:27.399 INFO:teuthology.orchestra.run.vm01.stdout:(87/138): qatlib-25.08.0-2.el9.x86_64.rpm 2.6 MB/s | 240 kB 00:00
2026-03-21T14:42:27.463 INFO:teuthology.orchestra.run.vm01.stdout:(88/138): qatlib-service-25.08.0-2.el9.x86_64.r 568 kB/s | 37 kB 00:00
2026-03-21T14:42:27.465 INFO:teuthology.orchestra.run.vm01.stdout:(89/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.0 MB/s | 66 kB 00:00
2026-03-21T14:42:27.514 INFO:teuthology.orchestra.run.vm05.stdout:(27/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 11 MB/s | 2.9 MB 00:00
2026-03-21T14:42:27.514 INFO:teuthology.orchestra.run.vm05.stdout:(28/138): ceph-grafana-dashboards-20.2.0-712.g7 370 kB/s | 43 kB 00:00
2026-03-21T14:42:27.547 INFO:teuthology.orchestra.run.vm01.stdout:(90/138): socat-1.7.4.1-8.el9.x86_64.rpm 3.5 MB/s | 303 kB 00:00
2026-03-21T14:42:27.548 INFO:teuthology.orchestra.run.vm01.stdout:(91/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 770 kB/s | 64 kB 00:00
2026-03-21T14:42:27.586 INFO:teuthology.orchestra.run.vm01.stdout:(92/138): lua-devel-5.4.4-4.el9.x86_64.rpm 569 kB/s | 22 kB 00:00
2026-03-21T14:42:27.610 INFO:teuthology.orchestra.run.vm01.stdout:(93/138): protobuf-compiler-3.14.0-17.el9.x86_6 14 MB/s | 862 kB 00:00
2026-03-21T14:42:27.616 INFO:teuthology.orchestra.run.vm01.stdout:(94/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 19 MB/s | 551 kB 00:00
2026-03-21T14:42:27.619 INFO:teuthology.orchestra.run.vm01.stdout:(95/138): grpc-data-1.46.7-10.el9.noarch.rpm 6.3 MB/s | 19 kB 00:00
2026-03-21T14:42:27.621 INFO:teuthology.orchestra.run.vm01.stdout:(96/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 28 MB/s | 308 kB 00:00
2026-03-21T14:42:27.624 INFO:teuthology.orchestra.run.vm01.stdout:(97/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.0 MB/s | 25 kB 00:00
2026-03-21T14:42:27.628 INFO:teuthology.orchestra.run.vm01.stdout:(98/138): liboath-2.6.12-1.el9.x86_64.rpm 14 MB/s | 49 kB 00:00
2026-03-21T14:42:27.636 INFO:teuthology.orchestra.run.vm01.stdout:(99/138): libunwind-1.6.2-1.el9.x86_64.rpm 8.9 MB/s | 67 kB 00:00
2026-03-21T14:42:27.642 INFO:teuthology.orchestra.run.vm01.stdout:(100/138): luarocks-3.9.2-5.el9.noarch.rpm 25 MB/s | 151 kB 00:00
2026-03-21T14:42:27.683 INFO:teuthology.orchestra.run.vm05.stdout:(29/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.0 MB/s | 173 kB 00:00
2026-03-21T14:42:27.687 INFO:teuthology.orchestra.run.vm01.stdout:(101/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 18 MB/s | 838 kB 00:00
2026-03-21T14:42:27.700 INFO:teuthology.orchestra.run.vm01.stdout:(102/138): python3-asyncssh-2.13.2-5.el9.noarch 41 MB/s | 548 kB 00:00
2026-03-21T14:42:27.703 INFO:teuthology.orchestra.run.vm01.stdout:(103/138): python3-autocommand-2.2.2-8.el9.noar 9.4 MB/s | 29 kB 00:00
2026-03-21T14:42:27.708 INFO:teuthology.orchestra.run.vm01.stdout:(104/138): python3-backports-tarfile-1.2.0-1.el 13 MB/s | 60 kB 00:00
2026-03-21T14:42:27.711 INFO:teuthology.orchestra.run.vm01.stdout:(105/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 13 MB/s | 43 kB 00:00
2026-03-21T14:42:27.714 INFO:teuthology.orchestra.run.vm01.stdout:(106/138): python3-cachetools-4.2.4-1.el9.noarc 13 MB/s | 32 kB 00:00
2026-03-21T14:42:27.716 INFO:teuthology.orchestra.run.vm01.stdout:(107/138): python3-certifi-2023.05.07-4.el9.noa 6.5 MB/s | 14 kB 00:00
2026-03-21T14:42:27.722 INFO:teuthology.orchestra.run.vm01.stdout:(108/138): python3-cheroot-10.0.1-4.el9.noarch. 30 MB/s | 173 kB 00:00
2026-03-21T14:42:27.736 INFO:teuthology.orchestra.run.vm01.stdout:(109/138): python3-cherrypy-18.6.1-2.el9.noarch 26 MB/s | 358 kB 00:00
2026-03-21T14:42:27.744 INFO:teuthology.orchestra.run.vm01.stdout:(110/138): python3-google-auth-2.45.0-1.el9.noa 29 MB/s | 254 kB 00:00
2026-03-21T14:42:27.775 INFO:teuthology.orchestra.run.vm01.stdout:(111/138): libarrow-9.0.0-15.el9.x86_64.rpm 28 MB/s | 4.4 MB 00:00
2026-03-21T14:42:27.782 INFO:teuthology.orchestra.run.vm01.stdout:(112/138): python3-grpcio-tools-1.46.7-10.el9.x 22 MB/s | 144 kB 00:00
2026-03-21T14:42:27.785 INFO:teuthology.orchestra.run.vm01.stdout:(113/138): python3-jaraco-8.2.1-3.el9.noarch.rp 3.6 MB/s | 11 kB 00:00
2026-03-21T14:42:27.789 INFO:teuthology.orchestra.run.vm01.stdout:(114/138): python3-jaraco-classes-3.2.1-5.el9.n 4.3 MB/s | 18 kB 00:00
2026-03-21T14:42:27.795 INFO:teuthology.orchestra.run.vm01.stdout:(115/138): python3-jaraco-collections-3.0.0-8.e 4.3 MB/s | 23 kB 00:00
2026-03-21T14:42:27.800 INFO:teuthology.orchestra.run.vm01.stdout:(116/138): python3-jaraco-context-6.0.1-3.el9.n 3.6 MB/s | 20 kB 00:00
2026-03-21T14:42:27.804 INFO:teuthology.orchestra.run.vm01.stdout:(117/138): python3-jaraco-functools-3.5.0-2.el9 5.6 MB/s | 19 kB 00:00
2026-03-21T14:42:27.813 INFO:teuthology.orchestra.run.vm01.stdout:(118/138): python3-jaraco-text-4.0.0-2.el9.noar 2.8 MB/s | 26 kB 00:00
2026-03-21T14:42:27.832 INFO:teuthology.orchestra.run.vm01.stdout:(119/138): python3-grpcio-1.46.7-10.el9.x86_64. 23 MB/s | 2.0 MB 00:00
2026-03-21T14:42:27.836 INFO:teuthology.orchestra.run.vm01.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 20 MB/s | 79 kB 00:00
2026-03-21T14:42:27.839 INFO:teuthology.orchestra.run.vm01.stdout:(121/138): python3-natsort-7.1.1-5.el9.noarch.r 18 MB/s | 58 kB 00:00
2026-03-21T14:42:27.844 INFO:teuthology.orchestra.run.vm01.stdout:(122/138): python3-kubernetes-26.1.0-3.el9.noar 33 MB/s | 1.0 MB 00:00
2026-03-21T14:42:27.845 INFO:teuthology.orchestra.run.vm01.stdout:(123/138): python3-portend-3.1.0-2.el9.noarch.r 2.8 MB/s | 16 kB 00:00
2026-03-21T14:42:27.849 INFO:teuthology.orchestra.run.vm01.stdout:(124/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 21 MB/s | 90 kB 00:00
2026-03-21T14:42:27.850 INFO:teuthology.orchestra.run.vm01.stdout:(125/138): python3-repoze-lru-0.7-16.el9.noarch 6.0 MB/s | 31 kB 00:00
2026-03-21T14:42:27.854 INFO:teuthology.orchestra.run.vm01.stdout:(126/138): python3-rsa-4.9-2.el9.noarch.rpm 17 MB/s | 59 kB 00:00
2026-03-21T14:42:27.855 INFO:teuthology.orchestra.run.vm01.stdout:(127/138): python3-routes-2.5.1-5.el9.noarch.rp 29 MB/s | 188 kB 00:00
2026-03-21T14:42:27.857 INFO:teuthology.orchestra.run.vm01.stdout:(128/138): python3-tempora-5.0.0-2.el9.noarch.r 11 MB/s | 36 kB 00:00
2026-03-21T14:42:27.860 INFO:teuthology.orchestra.run.vm01.stdout:(129/138): python3-typing-extensions-4.15.0-1.e 21 MB/s | 86 kB 00:00
2026-03-21T14:42:27.862 INFO:teuthology.orchestra.run.vm01.stdout:(130/138): python3-websocket-client-1.2.3-2.el9 19 MB/s | 90 kB 00:00
2026-03-21T14:42:27.863 INFO:teuthology.orchestra.run.vm01.stdout:(131/138): python3-xmltodict-0.12.0-15.el9.noar 6.8 MB/s | 22 kB 00:00
2026-03-21T14:42:27.865 INFO:teuthology.orchestra.run.vm01.stdout:(132/138): python3-zc-lockfile-2.0-10.el9.noarc 6.7 MB/s | 20 kB 00:00
2026-03-21T14:42:27.869 INFO:teuthology.orchestra.run.vm01.stdout:(133/138): re2-20211101-20.el9.x86_64.rpm 33 MB/s | 191 kB 00:00
2026-03-21T14:42:27.874 INFO:teuthology.orchestra.run.vm01.stdout:(134/138): s3cmd-2.4.0-1.el9.noarch.rpm 23 MB/s | 206 kB 00:00
2026-03-21T14:42:27.896 INFO:teuthology.orchestra.run.vm01.stdout:(135/138): thrift-0.15.0-4.el9.x86_64.rpm 59 MB/s | 1.6 MB 00:00
2026-03-21T14:42:28.324 INFO:teuthology.orchestra.run.vm05.stdout:(30/138): ceph-mgr-dashboard-20.2.0-712.g70f841 13 MB/s | 11 MB 00:00
2026-03-21T14:42:28.442 INFO:teuthology.orchestra.run.vm05.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 2.4 MB/s | 290 kB 00:00
2026-03-21T14:42:28.556 INFO:teuthology.orchestra.run.vm05.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 438 kB/s | 50 kB 00:00
2026-03-21T14:42:28.664 INFO:teuthology.orchestra.run.vm05.stdout:(33/138): ceph-mgr-diskprediction-local-20.2.0- 7.5 MB/s | 7.4 MB 00:00
2026-03-21T14:42:28.670 INFO:teuthology.orchestra.run.vm05.stdout:(34/138): ceph-prometheus-alerts-20.2.0-712.g70 153 kB/s | 17 kB 00:00
2026-03-21T14:42:28.803 INFO:teuthology.orchestra.run.vm05.stdout:(35/138): cephadm-20.2.0-712.g70f8415b.el9.noar 7.5 MB/s | 1.0 MB 00:00
2026-03-21T14:42:28.805 INFO:teuthology.orchestra.run.vm05.stdout:(36/138): ceph-volume-20.2.0-712.g70f8415b.el9. 2.1 MB/s | 298 kB 00:00
2026-03-21T14:42:28.857 INFO:teuthology.orchestra.run.vm01.stdout:(136/138): librbd1-20.2.0-712.g70f8415b.el9.x86 3.0 MB/s | 2.8 MB 00:00
2026-03-21T14:42:28.939 INFO:teuthology.orchestra.run.vm01.stdout:(137/138): librados2-20.2.0-712.g70f8415b.el9.x 3.3 MB/s | 3.5 MB 00:01
2026-03-21T14:42:29.126 INFO:teuthology.orchestra.run.vm01.stdout:(138/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 10 MB/s | 19 MB 00:01
2026-03-21T14:42:29.129 INFO:teuthology.orchestra.run.vm01.stdout:--------------------------------------------------------------------------------
2026-03-21T14:42:29.129 INFO:teuthology.orchestra.run.vm01.stdout:Total 23 MB/s | 267 MB 00:11
2026-03-21T14:42:29.191 INFO:teuthology.orchestra.run.vm05.stdout:(37/138): bzip2-1.0.8-11.el9.x86_64.rpm 141 kB/s | 55 kB 00:00
2026-03-21T14:42:29.347 INFO:teuthology.orchestra.run.vm05.stdout:(38/138): fuse-2.9.9-17.el9.x86_64.rpm 512 kB/s | 80 kB 00:00
2026-03-21T14:42:29.376 INFO:teuthology.orchestra.run.vm05.stdout:(39/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 615 kB/s | 351 kB 00:00
2026-03-21T14:42:29.501 INFO:teuthology.orchestra.run.vm05.stdout:(40/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 263 kB/s | 40 kB 00:00
2026-03-21T14:42:29.543 INFO:teuthology.orchestra.run.vm05.stdout:(41/138): libconfig-1.7.2-9.el9.x86_64.rpm 432 kB/s | 72 kB 00:00
2026-03-21T14:42:29.677 INFO:teuthology.orchestra.run.vm05.stdout:(42/138): libquadmath-11.5.0-14.el9.x86_64.rpm 1.3 MB/s | 184 kB 00:00
2026-03-21T14:42:29.727 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction check
2026-03-21T14:42:29.752 INFO:teuthology.orchestra.run.vm05.stdout:(43/138): mailcap-2.1.49-5.el9.noarch.rpm 446 kB/s | 33 kB 00:00
2026-03-21T14:42:29.788 INFO:teuthology.orchestra.run.vm01.stdout:Transaction check succeeded.
2026-03-21T14:42:29.788 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction test
2026-03-21T14:42:29.827 INFO:teuthology.orchestra.run.vm05.stdout:(44/138): libgfortran-11.5.0-14.el9.x86_64.rpm 2.4 MB/s | 794 kB 00:00
2026-03-21T14:42:29.845 INFO:teuthology.orchestra.run.vm05.stdout:(45/138): pciutils-3.7.0-7.el9.x86_64.rpm 997 kB/s | 93 kB 00:00
2026-03-21T14:42:29.927 INFO:teuthology.orchestra.run.vm05.stdout:(46/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 2.5 MB/s | 253 kB 00:00
2026-03-21T14:42:30.019 INFO:teuthology.orchestra.run.vm05.stdout:(47/138): python3-ply-3.11-14.el9.noarch.rpm 1.1 MB/s | 106 kB 00:00
2026-03-21T14:42:30.099 INFO:teuthology.orchestra.run.vm05.stdout:(48/138): python3-cryptography-36.0.1-5.el9.x86 4.9 MB/s | 1.2 MB 00:00
2026-03-21T14:42:30.128 INFO:teuthology.orchestra.run.vm05.stdout:(49/138): python3-pycparser-2.20-6.el9.noarch.r 1.2 MB/s | 135 kB 00:00
2026-03-21T14:42:30.179 INFO:teuthology.orchestra.run.vm05.stdout:(50/138): python3-pyparsing-2.4.7-9.el9.noarch. 1.8 MB/s | 150 kB 00:00
2026-03-21T14:42:30.215 INFO:teuthology.orchestra.run.vm05.stdout:(51/138): python3-requests-2.25.1-10.el9.noarch 1.4 MB/s | 126 kB 00:00
2026-03-21T14:42:30.330 INFO:teuthology.orchestra.run.vm05.stdout:(52/138): python3-urllib3-1.26.5-7.el9.noarch.r 1.4 MB/s | 218 kB 00:00
2026-03-21T14:42:30.335 INFO:teuthology.orchestra.run.vm05.stdout:(53/138): unzip-6.0-59.el9.x86_64.rpm 1.5 MB/s | 182 kB 00:00
2026-03-21T14:42:30.442 INFO:teuthology.orchestra.run.vm05.stdout:(54/138): zip-3.0-35.el9.x86_64.rpm 2.3 MB/s | 266 kB 00:00
2026-03-21T14:42:30.654 INFO:teuthology.orchestra.run.vm05.stdout:(55/138): flexiblas-3.0.4-9.el9.x86_64.rpm 140 kB/s | 30 kB 00:00
2026-03-21T14:42:30.656 INFO:teuthology.orchestra.run.vm05.stdout:(56/138): boost-program-options-1.75.0-13.el9.x 325 kB/s | 104 kB 00:00
2026-03-21T14:42:30.723 INFO:teuthology.orchestra.run.vm05.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 221 kB/s | 15 kB 00:00
2026-03-21T14:42:30.872 INFO:teuthology.orchestra.run.vm05.stdout:(58/138): libnbd-1.20.3-4.el9.x86_64.rpm 1.1 MB/s | 164 kB 00:00
2026-03-21T14:42:30.887 INFO:teuthology.orchestra.run.vm01.stdout:Transaction test succeeded.
2026-03-21T14:42:30.887 INFO:teuthology.orchestra.run.vm01.stdout:Running transaction 2026-03-21T14:42:30.968 INFO:teuthology.orchestra.run.vm05.stdout:(59/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.7 MB/s | 160 kB 00:00 2026-03-21T14:42:31.034 INFO:teuthology.orchestra.run.vm05.stdout:(60/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 690 kB/s | 45 kB 00:00 2026-03-21T14:42:31.115 INFO:teuthology.orchestra.run.vm05.stdout:(61/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 6.5 MB/s | 3.0 MB 00:00 2026-03-21T14:42:31.172 INFO:teuthology.orchestra.run.vm05.stdout:(62/138): librdkafka-1.6.1-102.el9.x86_64.rpm 4.7 MB/s | 662 kB 00:00 2026-03-21T14:42:31.192 INFO:teuthology.orchestra.run.vm05.stdout:(63/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 3.2 MB/s | 246 kB 00:00 2026-03-21T14:42:31.261 INFO:teuthology.orchestra.run.vm05.stdout:(64/138): libxslt-1.1.34-12.el9.x86_64.rpm 2.6 MB/s | 233 kB 00:00 2026-03-21T14:42:31.269 INFO:teuthology.orchestra.run.vm05.stdout:(65/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 3.7 MB/s | 292 kB 00:00 2026-03-21T14:42:31.331 INFO:teuthology.orchestra.run.vm05.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 2.7 MB/s | 188 kB 00:00 2026-03-21T14:42:31.335 INFO:teuthology.orchestra.run.vm05.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 637 kB/s | 42 kB 00:00 2026-03-21T14:42:31.400 INFO:teuthology.orchestra.run.vm05.stdout:(68/138): perl-Benchmark-1.23-483.el9.noarch.rp 409 kB/s | 26 kB 00:00 2026-03-21T14:42:31.473 INFO:teuthology.orchestra.run.vm05.stdout:(69/138): perl-Test-Harness-3.42-461.el9.noarch 4.0 MB/s | 295 kB 00:00 2026-03-21T14:42:31.557 INFO:teuthology.orchestra.run.vm05.stdout:(70/138): protobuf-3.14.0-17.el9.x86_64.rpm 12 MB/s | 1.0 MB 00:00 2026-03-21T14:42:31.738 INFO:teuthology.orchestra.run.vm05.stdout:(71/138): openblas-openmp-0.3.29-1.el9.x86_64.r 13 MB/s | 5.3 MB 00:00 2026-03-21T14:42:31.807 INFO:teuthology.orchestra.run.vm05.stdout:(72/138): python3-devel-3.9.25-3.el9.x86_64.rpm 3.4 MB/s | 244 kB 00:00 
2026-03-21T14:42:31.879 INFO:teuthology.orchestra.run.vm05.stdout:(73/138): python3-jinja2-2.11.3-8.el9.noarch.rp 3.4 MB/s | 249 kB 00:00 2026-03-21T14:42:31.946 INFO:teuthology.orchestra.run.vm05.stdout:(74/138): python3-jmespath-1.0.1-1.el9.noarch.r 721 kB/s | 48 kB 00:00 2026-03-21T14:42:32.014 INFO:teuthology.orchestra.run.vm05.stdout:(75/138): python3-babel-2.9.1-2.el9.noarch.rpm 13 MB/s | 6.0 MB 00:00 2026-03-21T14:42:32.017 INFO:teuthology.orchestra.run.vm05.stdout:(76/138): python3-libstoragemgmt-1.10.1-1.el9.x 2.4 MB/s | 177 kB 00:00 2026-03-21T14:42:32.090 INFO:teuthology.orchestra.run.vm05.stdout:(77/138): python3-markupsafe-1.1.1-12.el9.x86_6 458 kB/s | 35 kB 00:00 2026-03-21T14:42:32.209 INFO:teuthology.orchestra.run.vm01.stdout: Preparing : 1/1 2026-03-21T14:42:32.228 INFO:teuthology.orchestra.run.vm05.stdout:(78/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.1 MB/s | 442 kB 00:00 2026-03-21T14:42:32.244 INFO:teuthology.orchestra.run.vm01.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140 2026-03-21T14:42:32.263 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140 2026-03-21T14:42:32.303 INFO:teuthology.orchestra.run.vm01.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140 2026-03-21T14:42:32.328 INFO:teuthology.orchestra.run.vm05.stdout:(79/138): python3-packaging-20.9-5.el9.noarch.r 774 kB/s | 77 kB 00:00 2026-03-21T14:42:32.399 INFO:teuthology.orchestra.run.vm05.stdout:(80/138): python3-protobuf-3.14.0-17.el9.noarch 3.7 MB/s | 267 kB 00:00 2026-03-21T14:42:32.435 INFO:teuthology.orchestra.run.vm05.stdout:(81/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 15 MB/s | 6.1 MB 00:00 2026-03-21T14:42:32.470 INFO:teuthology.orchestra.run.vm05.stdout:(82/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.2 MB/s | 157 kB 00:00 2026-03-21T14:42:32.495 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140 2026-03-21T14:42:32.497 INFO:teuthology.orchestra.run.vm01.stdout: 
Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-21T14:42:32.510 INFO:teuthology.orchestra.run.vm05.stdout:(83/138): python3-pyasn1-modules-0.4.8-7.el9.no 3.7 MB/s | 277 kB 00:00 2026-03-21T14:42:32.536 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-21T14:42:32.541 INFO:teuthology.orchestra.run.vm05.stdout:(84/138): python3-requests-oauthlib-1.3.0-12.el 761 kB/s | 54 kB 00:00 2026-03-21T14:42:32.547 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-21T14:42:32.551 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140 2026-03-21T14:42:32.557 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140 2026-03-21T14:42:32.560 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140 2026-03-21T14:42:32.566 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140 2026-03-21T14:42:32.624 INFO:teuthology.orchestra.run.vm05.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 502 kB/s | 42 kB 00:00 2026-03-21T14:42:32.693 INFO:teuthology.orchestra.run.vm05.stdout:(86/138): qatlib-25.08.0-2.el9.x86_64.rpm 3.4 MB/s | 240 kB 00:00 2026-03-21T14:42:32.725 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140 2026-03-21T14:42:32.728 INFO:teuthology.orchestra.run.vm01.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-21T14:42:32.750 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-21T14:42:32.752 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-21T14:42:32.759 INFO:teuthology.orchestra.run.vm05.stdout:(87/138): qatlib-service-25.08.0-2.el9.x86_64.r 569 kB/s | 37 kB 00:00 
2026-03-21T14:42:32.783 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-21T14:42:32.784 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-21T14:42:32.803 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-21T14:42:32.825 INFO:teuthology.orchestra.run.vm05.stdout:(88/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.0 MB/s | 66 kB 00:00 2026-03-21T14:42:32.844 INFO:teuthology.orchestra.run.vm01.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140 2026-03-21T14:42:32.885 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140 2026-03-21T14:42:32.896 INFO:teuthology.orchestra.run.vm05.stdout:(89/138): socat-1.7.4.1-8.el9.x86_64.rpm 4.2 MB/s | 303 kB 00:00 2026-03-21T14:42:32.899 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140 2026-03-21T14:42:32.907 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140 2026-03-21T14:42:32.910 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140 2026-03-21T14:42:32.917 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140 2026-03-21T14:42:32.948 INFO:teuthology.orchestra.run.vm01.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140 2026-03-21T14:42:32.963 INFO:teuthology.orchestra.run.vm05.stdout:(90/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 954 kB/s | 64 kB 00:00 2026-03-21T14:42:32.968 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140 2026-03-21T14:42:32.974 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140 2026-03-21T14:42:32.983 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140 
2026-03-21T14:42:32.987 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140 2026-03-21T14:42:33.030 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140 2026-03-21T14:42:33.041 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140 2026-03-21T14:42:33.041 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140 2026-03-21T14:42:33.043 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-21T14:42:33.099 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-21T14:42:33.101 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-21T14:42:33.126 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-21T14:42:33.141 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140 2026-03-21T14:42:33.153 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140 2026-03-21T14:42:33.185 INFO:teuthology.orchestra.run.vm01.stdout: Installing : zip-3.0-35.el9.x86_64 33/140 2026-03-21T14:42:33.192 INFO:teuthology.orchestra.run.vm01.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140 2026-03-21T14:42:33.200 INFO:teuthology.orchestra.run.vm01.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140 2026-03-21T14:42:33.264 INFO:teuthology.orchestra.run.vm01.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140 2026-03-21T14:42:33.283 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140 2026-03-21T14:42:33.303 INFO:teuthology.orchestra.run.vm01.stdout: Installing 
: python3-rsa-4.9-2.el9.noarch 38/140 2026-03-21T14:42:33.311 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140 2026-03-21T14:42:33.320 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140 2026-03-21T14:42:33.328 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140 2026-03-21T14:42:33.332 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140 2026-03-21T14:42:33.351 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140 2026-03-21T14:42:33.359 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140 2026-03-21T14:42:33.367 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140 2026-03-21T14:42:33.383 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140 2026-03-21T14:42:33.397 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140 2026-03-21T14:42:33.405 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140 2026-03-21T14:42:33.416 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140 2026-03-21T14:42:33.471 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140 2026-03-21T14:42:33.604 INFO:teuthology.orchestra.run.vm05.stdout:(91/138): ceph-test-20.2.0-712.g70f8415b.el9.x8 10 MB/s | 84 MB 00:08 2026-03-21T14:42:33.660 INFO:teuthology.orchestra.run.vm05.stdout:(92/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 17 MB/s | 19 MB 00:01 2026-03-21T14:42:33.674 INFO:teuthology.orchestra.run.vm05.stdout:(93/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 38 MB/s | 551 kB 00:00 
2026-03-21T14:42:33.686 INFO:teuthology.orchestra.run.vm05.stdout:(94/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 27 MB/s | 308 kB 00:00 2026-03-21T14:42:33.688 INFO:teuthology.orchestra.run.vm05.stdout:(95/138): grpc-data-1.46.7-10.el9.noarch.rpm 8.5 MB/s | 19 kB 00:00 2026-03-21T14:42:33.745 INFO:teuthology.orchestra.run.vm05.stdout:(96/138): libarrow-9.0.0-15.el9.x86_64.rpm 78 MB/s | 4.4 MB 00:00 2026-03-21T14:42:33.748 INFO:teuthology.orchestra.run.vm05.stdout:(97/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 8.1 MB/s | 25 kB 00:00 2026-03-21T14:42:33.752 INFO:teuthology.orchestra.run.vm05.stdout:(98/138): liboath-2.6.12-1.el9.x86_64.rpm 15 MB/s | 49 kB 00:00 2026-03-21T14:42:33.755 INFO:teuthology.orchestra.run.vm05.stdout:(99/138): libunwind-1.6.2-1.el9.x86_64.rpm 20 MB/s | 67 kB 00:00 2026-03-21T14:42:33.767 INFO:teuthology.orchestra.run.vm05.stdout:(100/138): luarocks-3.9.2-5.el9.noarch.rpm 33 MB/s | 151 kB 00:00 2026-03-21T14:42:33.780 INFO:teuthology.orchestra.run.vm05.stdout:(101/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 65 MB/s | 838 kB 00:00 2026-03-21T14:42:33.789 INFO:teuthology.orchestra.run.vm05.stdout:(102/138): python3-asyncssh-2.13.2-5.el9.noarch 60 MB/s | 548 kB 00:00 2026-03-21T14:42:33.800 INFO:teuthology.orchestra.run.vm05.stdout:(103/138): python3-autocommand-2.2.2-8.el9.noar 2.7 MB/s | 29 kB 00:00 2026-03-21T14:42:33.841 INFO:teuthology.orchestra.run.vm05.stdout:(104/138): python3-backports-tarfile-1.2.0-1.el 1.4 MB/s | 60 kB 00:00 2026-03-21T14:42:33.849 INFO:teuthology.orchestra.run.vm05.stdout:(105/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 5.4 MB/s | 43 kB 00:00 2026-03-21T14:42:33.852 INFO:teuthology.orchestra.run.vm05.stdout:(106/138): python3-cachetools-4.2.4-1.el9.noarc 12 MB/s | 32 kB 00:00 2026-03-21T14:42:33.854 INFO:teuthology.orchestra.run.vm05.stdout:(107/138): python3-certifi-2023.05.07-4.el9.noa 7.2 MB/s | 14 kB 00:00 2026-03-21T14:42:33.858 INFO:teuthology.orchestra.run.vm05.stdout:(108/138): 
python3-cheroot-10.0.1-4.el9.noarch. 42 MB/s | 173 kB 00:00 2026-03-21T14:42:33.864 INFO:teuthology.orchestra.run.vm05.stdout:(109/138): python3-cherrypy-18.6.1-2.el9.noarch 56 MB/s | 358 kB 00:00 2026-03-21T14:42:33.870 INFO:teuthology.orchestra.run.vm05.stdout:(110/138): python3-google-auth-2.45.0-1.el9.noa 45 MB/s | 254 kB 00:00 2026-03-21T14:42:33.886 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140 2026-03-21T14:42:33.896 INFO:teuthology.orchestra.run.vm05.stdout:(111/138): python3-grpcio-1.46.7-10.el9.x86_64. 78 MB/s | 2.0 MB 00:00 2026-03-21T14:42:33.905 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140 2026-03-21T14:42:33.909 INFO:teuthology.orchestra.run.vm05.stdout:(112/138): python3-grpcio-tools-1.46.7-10.el9.x 11 MB/s | 144 kB 00:00 2026-03-21T14:42:33.912 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140 2026-03-21T14:42:33.912 INFO:teuthology.orchestra.run.vm05.stdout:(113/138): python3-jaraco-8.2.1-3.el9.noarch.rp 5.2 MB/s | 11 kB 00:00 2026-03-21T14:42:33.914 INFO:teuthology.orchestra.run.vm05.stdout:(114/138): python3-jaraco-classes-3.2.1-5.el9.n 8.2 MB/s | 18 kB 00:00 2026-03-21T14:42:33.916 INFO:teuthology.orchestra.run.vm05.stdout:(115/138): python3-jaraco-collections-3.0.0-8.e 11 MB/s | 23 kB 00:00 2026-03-21T14:42:33.919 INFO:teuthology.orchestra.run.vm05.stdout:(116/138): python3-jaraco-context-6.0.1-3.el9.n 7.5 MB/s | 20 kB 00:00 2026-03-21T14:42:33.920 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140 2026-03-21T14:42:33.921 INFO:teuthology.orchestra.run.vm05.stdout:(117/138): python3-jaraco-functools-3.5.0-2.el9 8.9 MB/s | 19 kB 00:00 2026-03-21T14:42:33.925 INFO:teuthology.orchestra.run.vm05.stdout:(118/138): python3-jaraco-text-4.0.0-2.el9.noar 8.7 MB/s | 26 kB 00:00 2026-03-21T14:42:33.926 
INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140 2026-03-21T14:42:33.935 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140 2026-03-21T14:42:33.940 INFO:teuthology.orchestra.run.vm05.stdout:(119/138): python3-kubernetes-26.1.0-3.el9.noar 69 MB/s | 1.0 MB 00:00 2026-03-21T14:42:33.942 INFO:teuthology.orchestra.run.vm01.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140 2026-03-21T14:42:33.943 INFO:teuthology.orchestra.run.vm05.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 24 MB/s | 79 kB 00:00 2026-03-21T14:42:33.944 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140 2026-03-21T14:42:33.946 INFO:teuthology.orchestra.run.vm05.stdout:(121/138): python3-natsort-7.1.1-5.el9.noarch.r 22 MB/s | 58 kB 00:00 2026-03-21T14:42:33.949 INFO:teuthology.orchestra.run.vm05.stdout:(122/138): python3-portend-3.1.0-2.el9.noarch.r 5.7 MB/s | 16 kB 00:00 2026-03-21T14:42:33.953 INFO:teuthology.orchestra.run.vm05.stdout:(123/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 23 MB/s | 90 kB 00:00 2026-03-21T14:42:33.957 INFO:teuthology.orchestra.run.vm05.stdout:(124/138): python3-repoze-lru-0.7-16.el9.noarch 8.1 MB/s | 31 kB 00:00 2026-03-21T14:42:33.963 INFO:teuthology.orchestra.run.vm05.stdout:(125/138): python3-routes-2.5.1-5.el9.noarch.rp 33 MB/s | 188 kB 00:00 2026-03-21T14:42:33.967 INFO:teuthology.orchestra.run.vm05.stdout:(126/138): python3-rsa-4.9-2.el9.noarch.rpm 15 MB/s | 59 kB 00:00 2026-03-21T14:42:33.969 INFO:teuthology.orchestra.run.vm05.stdout:(127/138): python3-tempora-5.0.0-2.el9.noarch.r 16 MB/s | 36 kB 00:00 2026-03-21T14:42:33.973 INFO:teuthology.orchestra.run.vm05.stdout:(128/138): python3-typing-extensions-4.15.0-1.e 24 MB/s | 86 kB 00:00 2026-03-21T14:42:33.974 INFO:teuthology.orchestra.run.vm01.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140 2026-03-21T14:42:33.978 
INFO:teuthology.orchestra.run.vm05.stdout:(129/138): python3-websocket-client-1.2.3-2.el9 18 MB/s | 90 kB 00:00 2026-03-21T14:42:33.980 INFO:teuthology.orchestra.run.vm05.stdout:(130/138): python3-xmltodict-0.12.0-15.el9.noar 9.9 MB/s | 22 kB 00:00 2026-03-21T14:42:33.982 INFO:teuthology.orchestra.run.vm05.stdout:(131/138): python3-zc-lockfile-2.0-10.el9.noarc 9.8 MB/s | 20 kB 00:00 2026-03-21T14:42:33.987 INFO:teuthology.orchestra.run.vm05.stdout:(132/138): re2-20211101-20.el9.x86_64.rpm 44 MB/s | 191 kB 00:00 2026-03-21T14:42:33.992 INFO:teuthology.orchestra.run.vm05.stdout:(133/138): s3cmd-2.4.0-1.el9.noarch.rpm 37 MB/s | 206 kB 00:00 2026-03-21T14:42:34.010 INFO:teuthology.orchestra.run.vm05.stdout:(134/138): lua-devel-5.4.4-4.el9.x86_64.rpm 21 kB/s | 22 kB 00:01 2026-03-21T14:42:34.027 INFO:teuthology.orchestra.run.vm01.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140 2026-03-21T14:42:34.034 INFO:teuthology.orchestra.run.vm05.stdout:(135/138): thrift-0.15.0-4.el9.x86_64.rpm 38 MB/s | 1.6 MB 00:00 2026-03-21T14:42:34.042 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140 2026-03-21T14:42:34.049 INFO:teuthology.orchestra.run.vm01.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140 2026-03-21T14:42:34.056 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140 2026-03-21T14:42:34.064 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140 2026-03-21T14:42:34.070 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140 2026-03-21T14:42:34.080 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140 2026-03-21T14:42:34.086 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140 2026-03-21T14:42:34.122 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
python3-portend-3.1.0-2.el9.noarch 68/140 2026-03-21T14:42:34.136 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140 2026-03-21T14:42:34.146 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140 2026-03-21T14:42:34.156 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140 2026-03-21T14:42:34.203 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140 2026-03-21T14:42:34.318 INFO:teuthology.orchestra.run.vm05.stdout:(136/138): protobuf-compiler-3.14.0-17.el9.x86_ 1.2 MB/s | 862 kB 00:00 2026-03-21T14:42:34.496 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140 2026-03-21T14:42:34.529 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140 2026-03-21T14:42:34.534 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-21T14:42:34.538 INFO:teuthology.orchestra.run.vm01.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140 2026-03-21T14:42:34.604 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140 2026-03-21T14:42:34.607 INFO:teuthology.orchestra.run.vm01.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140 2026-03-21T14:42:34.632 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140 2026-03-21T14:42:35.042 INFO:teuthology.orchestra.run.vm05.stdout:(137/138): librados2-20.2.0-712.g70f8415b.el9.x 3.4 MB/s | 3.5 MB 00:01 2026-03-21T14:42:35.049 INFO:teuthology.orchestra.run.vm01.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140 2026-03-21T14:42:35.115 INFO:teuthology.orchestra.run.vm05.stdout:(138/138): librbd1-20.2.0-712.g70f8415b.el9.x86 2.6 MB/s | 2.8 MB 00:01 2026-03-21T14:42:35.119 
INFO:teuthology.orchestra.run.vm05.stdout:-------------------------------------------------------------------------------- 2026-03-21T14:42:35.119 INFO:teuthology.orchestra.run.vm05.stdout:Total 19 MB/s | 267 MB 00:13 2026-03-21T14:42:35.147 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140 2026-03-21T14:42:35.744 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction check 2026-03-21T14:42:35.811 INFO:teuthology.orchestra.run.vm05.stdout:Transaction check succeeded. 2026-03-21T14:42:35.811 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction test 2026-03-21T14:42:36.016 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140 2026-03-21T14:42:36.045 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140 2026-03-21T14:42:36.053 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140 2026-03-21T14:42:36.057 INFO:teuthology.orchestra.run.vm01.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140 2026-03-21T14:42:36.065 INFO:teuthology.orchestra.run.vm01.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140 2026-03-21T14:42:36.392 INFO:teuthology.orchestra.run.vm01.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140 2026-03-21T14:42:36.395 INFO:teuthology.orchestra.run.vm01.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-21T14:42:36.420 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-21T14:42:36.423 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140 2026-03-21T14:42:36.894 INFO:teuthology.orchestra.run.vm05.stdout:Transaction test succeeded. 
2026-03-21T14:42:36.894 INFO:teuthology.orchestra.run.vm05.stdout:Running transaction 2026-03-21T14:42:37.713 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:37.761 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:37.782 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:37.798 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140 2026-03-21T14:42:37.808 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140 2026-03-21T14:42:37.826 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140 2026-03-21T14:42:37.848 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140 2026-03-21T14:42:37.949 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140 2026-03-21T14:42:37.966 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140 2026-03-21T14:42:37.997 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140 2026-03-21T14:42:38.038 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140 2026-03-21T14:42:38.062 INFO:teuthology.orchestra.run.vm05.stdout: Preparing : 1/1 2026-03-21T14:42:38.071 INFO:teuthology.orchestra.run.vm05.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140 2026-03-21T14:42:38.075 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140 2026-03-21T14:42:38.089 INFO:teuthology.orchestra.run.vm05.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140 2026-03-21T14:42:38.106 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
python3-cherrypy-18.6.1-2.el9.noarch 99/140 2026-03-21T14:42:38.122 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140 2026-03-21T14:42:38.130 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140 2026-03-21T14:42:38.136 INFO:teuthology.orchestra.run.vm01.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140 2026-03-21T14:42:38.141 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140 2026-03-21T14:42:38.144 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-21T14:42:38.166 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-21T14:42:38.281 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140 2026-03-21T14:42:38.283 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-21T14:42:38.318 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-21T14:42:38.329 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-21T14:42:38.334 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140 2026-03-21T14:42:38.338 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140 2026-03-21T14:42:38.341 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140 2026-03-21T14:42:38.350 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140 2026-03-21T14:42:38.499 INFO:teuthology.orchestra.run.vm01.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140 2026-03-21T14:42:38.506 INFO:teuthology.orchestra.run.vm01.stdout: Installing : 
ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-21T14:42:38.552 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140 2026-03-21T14:42:38.556 INFO:teuthology.orchestra.run.vm05.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-21T14:42:38.558 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-21T14:42:38.558 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-21T14:42:38.558 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-21T14:42:38.558 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:38.564 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-21T14:42:38.578 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-21T14:42:38.580 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-21T14:42:38.609 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-21T14:42:38.611 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-21T14:42:38.628 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-21T14:42:38.662 INFO:teuthology.orchestra.run.vm05.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140 2026-03-21T14:42:38.689 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140 2026-03-21T14:42:38.701 INFO:teuthology.orchestra.run.vm05.stdout: Installing : 
python3-pyasn1-0.4.8-7.el9.noarch 17/140 2026-03-21T14:42:38.708 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140 2026-03-21T14:42:38.712 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140 2026-03-21T14:42:38.718 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140 2026-03-21T14:42:38.749 INFO:teuthology.orchestra.run.vm05.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140 2026-03-21T14:42:38.768 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140 2026-03-21T14:42:38.774 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140 2026-03-21T14:42:38.782 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140 2026-03-21T14:42:38.785 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140 2026-03-21T14:42:38.829 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140 2026-03-21T14:42:38.836 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140 2026-03-21T14:42:38.838 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140 2026-03-21T14:42:38.839 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-21T14:42:38.895 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-21T14:42:38.897 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-21T14:42:38.921 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-21T14:42:38.938 INFO:teuthology.orchestra.run.vm05.stdout: Installing 
: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140 2026-03-21T14:42:38.946 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140 2026-03-21T14:42:38.980 INFO:teuthology.orchestra.run.vm05.stdout: Installing : zip-3.0-35.el9.x86_64 33/140 2026-03-21T14:42:38.987 INFO:teuthology.orchestra.run.vm05.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140 2026-03-21T14:42:38.998 INFO:teuthology.orchestra.run.vm05.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140 2026-03-21T14:42:39.074 INFO:teuthology.orchestra.run.vm05.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140 2026-03-21T14:42:39.094 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140 2026-03-21T14:42:39.116 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140 2026-03-21T14:42:39.123 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140 2026-03-21T14:42:39.133 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140 2026-03-21T14:42:39.140 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140 2026-03-21T14:42:39.145 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140 2026-03-21T14:42:39.164 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140 2026-03-21T14:42:39.172 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140 2026-03-21T14:42:39.183 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140 2026-03-21T14:42:39.200 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140 2026-03-21T14:42:39.215 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140 2026-03-21T14:42:39.222 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140 2026-03-21T14:42:39.232 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140 2026-03-21T14:42:39.289 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140 2026-03-21T14:42:39.697 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140 2026-03-21T14:42:39.713 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140 2026-03-21T14:42:39.719 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140 2026-03-21T14:42:39.727 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140 2026-03-21T14:42:39.732 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140 2026-03-21T14:42:39.740 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140 2026-03-21T14:42:39.744 INFO:teuthology.orchestra.run.vm05.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140 2026-03-21T14:42:39.746 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140 2026-03-21T14:42:39.781 INFO:teuthology.orchestra.run.vm05.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140 2026-03-21T14:42:39.839 INFO:teuthology.orchestra.run.vm05.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140 2026-03-21T14:42:39.853 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140 2026-03-21T14:42:39.862 INFO:teuthology.orchestra.run.vm05.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140 2026-03-21T14:42:39.869 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140 2026-03-21T14:42:39.876 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140 2026-03-21T14:42:39.883 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140 2026-03-21T14:42:39.892 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140 2026-03-21T14:42:39.900 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140 2026-03-21T14:42:39.939 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140 2026-03-21T14:42:39.954 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140 2026-03-21T14:42:39.962 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140 2026-03-21T14:42:39.974 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140 2026-03-21T14:42:40.027 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140 2026-03-21T14:42:40.320 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140 2026-03-21T14:42:40.353 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140 2026-03-21T14:42:40.358 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-21T14:42:40.362 INFO:teuthology.orchestra.run.vm05.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140 2026-03-21T14:42:40.432 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140 2026-03-21T14:42:40.434 INFO:teuthology.orchestra.run.vm05.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 78/140 2026-03-21T14:42:40.463 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140 2026-03-21T14:42:40.890 INFO:teuthology.orchestra.run.vm05.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140 2026-03-21T14:42:40.982 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140 2026-03-21T14:42:41.859 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140 2026-03-21T14:42:41.887 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140 2026-03-21T14:42:41.893 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140 2026-03-21T14:42:41.897 INFO:teuthology.orchestra.run.vm05.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140 2026-03-21T14:42:41.904 INFO:teuthology.orchestra.run.vm05.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140 2026-03-21T14:42:42.231 INFO:teuthology.orchestra.run.vm05.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140 2026-03-21T14:42:42.233 INFO:teuthology.orchestra.run.vm05.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-21T14:42:42.257 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-21T14:42:42.259 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140 2026-03-21T14:42:43.558 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:43.567 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:43.588 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-21T14:42:43.602 INFO:teuthology.orchestra.run.vm05.stdout: Installing : 
python3-pyparsing-2.4.7-9.el9.noarch 91/140 2026-03-21T14:42:43.612 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140 2026-03-21T14:42:43.631 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140 2026-03-21T14:42:43.654 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140 2026-03-21T14:42:43.764 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140 2026-03-21T14:42:43.782 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140 2026-03-21T14:42:43.815 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140 2026-03-21T14:42:43.865 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140 2026-03-21T14:42:43.941 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140 2026-03-21T14:42:43.955 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140 2026-03-21T14:42:43.963 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140 2026-03-21T14:42:43.971 INFO:teuthology.orchestra.run.vm05.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140 2026-03-21T14:42:43.977 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140 2026-03-21T14:42:43.982 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-21T14:42:43.999 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-21T14:42:44.355 INFO:teuthology.orchestra.run.vm05.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140 2026-03-21T14:42:44.435 INFO:teuthology.orchestra.run.vm05.stdout: Installing : 
ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-21T14:42:44.574 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-21T14:42:44.574 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-21T14:42:44.574 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-21T14:42:44.574 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:44.621 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /sys 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /proc 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /mnt 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /var/tmp 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /home 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /root 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout:skipping the directory /tmp 2026-03-21T14:42:45.394 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:45.528 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-21T14:42:45.558 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-21T14:42:45.558 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but 
globs are not supported for this. 2026-03-21T14:42:45.558 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-21T14:42:45.558 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-21T14:42:45.559 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-21T14:42:45.559 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:45.838 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 
2026-03-21T14:42:45.865 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:45.876 INFO:teuthology.orchestra.run.vm01.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140 2026-03-21T14:42:45.879 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140 2026-03-21T14:42:45.903 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:45.903 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'qat' with GID 994. 2026-03-21T14:42:45.903 INFO:teuthology.orchestra.run.vm01.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-21T14:42:45.903 INFO:teuthology.orchestra.run.vm01.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-21T14:42:45.903 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:45.915 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:45.948 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:45.948 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 
2026-03-21T14:42:45.948 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:45.975 INFO:teuthology.orchestra.run.vm01.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140 2026-03-21T14:42:46.008 INFO:teuthology.orchestra.run.vm01.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140 2026-03-21T14:42:46.086 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140 2026-03-21T14:42:46.092 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-21T14:42:46.107 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-21T14:42:46.107 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:46.107 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-21T14:42:46.107 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:46.911 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 
2026-03-21T14:42:46.939 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:47.018 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-21T14:42:47.022 INFO:teuthology.orchestra.run.vm01.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-21T14:42:47.031 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140 2026-03-21T14:42:47.061 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140 2026-03-21T14:42:47.065 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-21T14:42:48.459 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-21T14:42:48.470 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-21T14:42:49.032 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-21T14:42:49.035 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-21T14:42:49.101 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-21T14:42:49.151 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140 2026-03-21T14:42:49.154 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-21T14:42:49.180 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:49.195 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-21T14:42:49.208 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-21T14:42:49.256 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140 2026-03-21T14:42:50.455 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140 2026-03-21T14:42:50.545 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 
2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-21T14:42:50.569 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:50.599 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-21T14:42:50.622 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-21T14:42:50.622 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:50.622 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-21T14:42:50.622 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:50.820 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-21T14:42:50.843 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /sys 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /proc 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /mnt 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /var/tmp 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /home 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /root 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout:skipping the directory /tmp 2026-03-21T14:42:51.316 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:51.447 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-21T14:42:51.471 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-21T14:42:51.471 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:51.471 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-21T14:42:51.472 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-21T14:42:51.472 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 
2026-03-21T14:42:51.472 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:51.735 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-21T14:42:51.760 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-21T14:42:51.761 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:51.761 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-21T14:42:51.761 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-21T14:42:51.761 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-21T14:42:51.761 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:51.770 INFO:teuthology.orchestra.run.vm05.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140 2026-03-21T14:42:51.773 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140 2026-03-21T14:42:51.811 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:51.812 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'qat' with GID 994. 2026-03-21T14:42:51.812 INFO:teuthology.orchestra.run.vm05.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-21T14:42:51.812 INFO:teuthology.orchestra.run.vm05.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 
2026-03-21T14:42:51.812 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:51.822 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:51.850 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-21T14:42:51.850 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-21T14:42:51.850 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:51.873 INFO:teuthology.orchestra.run.vm05.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140 2026-03-21T14:42:51.903 INFO:teuthology.orchestra.run.vm05.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140 2026-03-21T14:42:51.985 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140 2026-03-21T14:42:51.992 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-21T14:42:52.008 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-21T14:42:52.009 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:52.009 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-21T14:42:52.009 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:52.872 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-21T14:42:52.901 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:42:53.026 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-21T14:42:53.030 INFO:teuthology.orchestra.run.vm05.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-21T14:42:53.039 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140 2026-03-21T14:42:53.068 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140 2026-03-21T14:42:53.071 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-21T14:42:54.451 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-21T14:42:54.465 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-21T14:42:55.025 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-21T14:42:55.028 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-21T14:42:55.096 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-21T14:42:55.153 
INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140 2026-03-21T14:42:55.156 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-21T14:42:55.180 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:42:55.196 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-21T14:42:55.208 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140
2026-03-21T14:42:55.256 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140
2026-03-21T14:42:55.385 INFO:teuthology.orchestra.run.vm01.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140
2026-03-21T14:42:55.392 INFO:teuthology.orchestra.run.vm01.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140
2026-03-21T14:42:55.399 INFO:teuthology.orchestra.run.vm01.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140
2026-03-21T14:42:55.411 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140
2026-03-21T14:42:55.431 INFO:teuthology.orchestra.run.vm01.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140
2026-03-21T14:42:55.439 INFO:teuthology.orchestra.run.vm01.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140
2026-03-21T14:42:55.444 INFO:teuthology.orchestra.run.vm01.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140
2026-03-21T14:42:55.444 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-21T14:42:55.462 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-21T14:42:55.462 INFO:teuthology.orchestra.run.vm01.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:42:56.460 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140
2026-03-21T14:42:56.662 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-21T14:42:56.681 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140
2026-03-21T14:42:56.682 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-21T14:42:56.682 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-21T14:42:56.682 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-21T14:42:56.682 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-21T14:42:56.682 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:42:56.977 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-21T14:42:56.998 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140
2026-03-21T14:42:56.998 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-21T14:42:56.998 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-21T14:42:56.998 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:42:57.300 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-21T14:42:57.324 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140
2026-03-21T14:42:57.446 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140
2026-03-21T14:42:57.447 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140
2026-03-21T14:42:57.448 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140
2026-03-21T14:42:57.449 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140
2026-03-21T14:42:57.450 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140
2026-03-21T14:42:57.451 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140
2026-03-21T14:42:57.574 INFO:teuthology.orchestra.run.vm01.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout:Upgraded:
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout:Installed:
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: bzip2-1.0.8-11.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: fuse-2.9.9-17.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-21T14:42:57.575 INFO:teuthology.orchestra.run.vm01.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: lua-5.4.4-4.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: perl-Benchmark-1.23-483.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-ply-3.11-14.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-21T14:42:57.576 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: re2-1:20211101-20.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: unzip-6.0-59.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: zip-3.0-35.el9.x86_64 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:42:57.577 INFO:teuthology.orchestra.run.vm01.stdout:Complete! 
2026-03-21T14:42:57.682 DEBUG:teuthology.parallel:result is None
2026-03-21T14:43:01.807 INFO:teuthology.orchestra.run.vm05.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140
2026-03-21T14:43:01.814 INFO:teuthology.orchestra.run.vm05.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140
2026-03-21T14:43:01.822 INFO:teuthology.orchestra.run.vm05.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140
2026-03-21T14:43:01.835 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140
2026-03-21T14:43:01.855 INFO:teuthology.orchestra.run.vm05.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140
2026-03-21T14:43:01.864 INFO:teuthology.orchestra.run.vm05.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140
2026-03-21T14:43:01.868 INFO:teuthology.orchestra.run.vm05.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140
2026-03-21T14:43:01.869 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-21T14:43:01.885 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140
2026-03-21T14:43:01.885 INFO:teuthology.orchestra.run.vm05.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140
2026-03-21T14:43:03.695 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140
2026-03-21T14:43:03.696 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 44/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140
2026-03-21T14:43:03.697 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140
2026-03-21T14:43:03.698 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140
2026-03-21T14:43:03.699 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140
2026-03-21T14:43:03.700 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140
2026-03-21T14:43:03.701 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout:Upgraded:
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout:Installed:
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: bzip2-1.0.8-11.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.805 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: fuse-2.9.9-17.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libconfig-1.7.2-9.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: libxslt-1.1.34-12.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: lua-5.4.4-4.el9.x86_64
2026-03-21T14:43:03.806 INFO:teuthology.orchestra.run.vm05.stdout: lua-devel-5.4.4-4.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: luarocks-3.9.2-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: mailcap-2.1.49-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: perl-Benchmark-1.23-483.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: perl-Test-Harness-1:3.42-461.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-jmespath-1.0.1-1.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-ply-3.11-14.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-21T14:43:03.807 INFO:teuthology.orchestra.run.vm05.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-xmltodict-0.12.0-15.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: re2-1:20211101-20.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: s3cmd-2.4.0-1.el9.noarch
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: socat-1.7.4.1-8.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: unzip-6.0-59.el9.x86_64
2026-03-21T14:43:03.808 INFO:teuthology.orchestra.run.vm05.stdout: xmlstarlet-1.6.1-20.el9.x86_64
2026-03-21T14:43:03.809 INFO:teuthology.orchestra.run.vm05.stdout: zip-3.0-35.el9.x86_64
2026-03-21T14:43:03.809 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:03.809 INFO:teuthology.orchestra.run.vm05.stdout:Complete!
2026-03-21T14:43:03.922 DEBUG:teuthology.parallel:result is None 2026-03-21T14:43:03.922 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-21T14:43:04.610 DEBUG:teuthology.orchestra.run.vm01:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-21T14:43:04.629 INFO:teuthology.orchestra.run.vm01.stdout:20.2.0-712.g70f8415b.el9 2026-03-21T14:43:04.630 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9 2026-03-21T14:43:04.630 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed. 2026-03-21T14:43:04.631 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-21T14:43:05.279 DEBUG:teuthology.orchestra.run.vm05:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-21T14:43:05.300 INFO:teuthology.orchestra.run.vm05.stdout:20.2.0-712.g70f8415b.el9 2026-03-21T14:43:05.300 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9 2026-03-21T14:43:05.300 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed. 2026-03-21T14:43:05.301 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-21T14:43:05.301 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:05.301 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-21T14:43:05.327 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:05.327 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-21T14:43:05.369 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 
2026-03-21T14:43:05.369 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:05.369 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper 2026-03-21T14:43:05.393 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-21T14:43:05.457 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:05.457 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/daemon-helper 2026-03-21T14:43:05.482 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-21T14:43:05.547 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-21T14:43:05.548 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:05.548 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-21T14:43:05.573 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-21T14:43:05.638 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:05.638 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-21T14:43:05.664 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-21T14:43:05.728 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-21T14:43:05.728 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:05.728 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer 2026-03-21T14:43:05.755 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-21T14:43:05.821 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:05.821 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/usr/bin/stdin-killer 2026-03-21T14:43:05.845 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-21T14:43:05.907 INFO:teuthology.run_tasks:Running task ceph... 2026-03-21T14:43:05.952 INFO:tasks.ceph:Making ceph log dir writeable by non-root... 
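The "shipping" of `daemon-helper`, `adjust-ulimits`, and `stdin-killer` seen here is simply streaming each file's body over the SSH channel into `dd`, then fixing permissions. A self-contained sketch — the `ship` function and the `./demo-helper` destination are hypothetical; the real task runs `dd`/`chmod` under sudo on the remote:

```shell
# Hypothetical stand-in for install.util's ship step: write stdin to the
# destination with dd, then make it world-readable and executable.
ship() {
    dest="$1"
    dd of="$dest" 2>/dev/null    # file body arrives on stdin
    chmod a=rx -- "$dest"
}

printf '#!/bin/sh\necho hello\n' | ship ./demo-helper
./demo-helper    # prints: hello
```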
2026-03-21T14:43:05.952 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /var/log/ceph 2026-03-21T14:43:05.954 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 777 /var/log/ceph 2026-03-21T14:43:05.978 INFO:tasks.ceph:Disabling ceph logrotate... 2026-03-21T14:43:05.978 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-21T14:43:06.020 DEBUG:teuthology.orchestra.run.vm05:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-21T14:43:06.046 INFO:tasks.ceph:Creating extra log directories... 2026-03-21T14:43:06.046 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-21T14:43:06.088 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-21T14:43:06.121 INFO:tasks.ceph:Creating ceph cluster ceph... 2026-03-21T14:43:06.121 INFO:tasks.ceph:config {'conf': {'client': {'rbd_persistent_cache_mode': 'rwl', 'rbd_persistent_cache_path': '/home/ubuntu/cephtest/rbd-pwl-cache', 'rbd_persistent_cache_size': 1073741824, 'rbd_plugins': 'pwl_cache'}, 'global': {'mon warn on pool no app': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True} 2026-03-21T14:43:06.121 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3489', 'branch': 'tentacle', 'description': 'rbd/pwl-cache/tmpfs/{1-base/install 2-cluster/{fix-2} 3-supported-random-distro$/{centos_latest} 4-cache-path 5-cache-mode/rwl 6-cache-size/1G 7-workloads/qemu_xfstests conf/{disable-pool-app}}', 'email': None, 
'first_in_suite': False, 'flavor': 'default', 'job_id': '3489', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps', 'no_nested_subset': False, 'os_type': 'centos', 'os_version': '9.stream', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'rbd_persistent_cache_mode': 'rwl', 'rbd_persistent_cache_path': '/home/ubuntu/cephtest/rbd-pwl-cache', 'rbd_persistent_cache_size': 1073741824, 'rbd_plugins': 'pwl_cache'}, 'global': {'mon warn on pool no app': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)'], 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'ceph-deploy': {'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}}}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'workunit': {'branch': 'tt-tentacle', 'sha1': 
'0392f78529848ec72469e8e431875cb98d3a5fb4'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mgr.x', 'osd.0', 'osd.1'], ['mon.b', 'mgr.y', 'osd.2', 'osd.3', 'client.0']], 'seed': 3051, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'sleep_before_teardown': 0, 'subset': '1/128', 'suite': 'rbd', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '0392f78529848ec72469e8e431875cb98d3a5fb4', 'targets': {'vm01.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNIpSOAdY3T/6haAG7o4rDiTr6BfJep0HvSksZFOuR7MI7ZX0rp3SzA5gfwanXw34+aFwPB6p6/tRK3WSG1ovFI=', 'vm05.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEjUrV+jd2i3MWkce3otNYg7MpL/Pjsf6jQNdtK3cafD2PjuVy4AubknZDhcbgsCrw92RlW1qrWhKP65TZno3LE='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'exec': {'client.0': ['mkdir /home/ubuntu/cephtest/tmpfs', 'mkdir /home/ubuntu/cephtest/rbd-pwl-cache', 'sudo mount -t tmpfs -o size=20G tmpfs /home/ubuntu/cephtest/tmpfs', 'truncate -s 20G /home/ubuntu/cephtest/tmpfs/loopfile', 'mkfs.ext4 /home/ubuntu/cephtest/tmpfs/loopfile', 'sudo mount -o 
loop /home/ubuntu/cephtest/tmpfs/loopfile /home/ubuntu/cephtest/rbd-pwl-cache', 'sudo chmod 777 /home/ubuntu/cephtest/rbd-pwl-cache']}}, {'exec_on_cleanup': {'client.0': ['sudo umount /home/ubuntu/cephtest/rbd-pwl-cache', 'sudo umount /home/ubuntu/cephtest/tmpfs', 'rm -rf /home/ubuntu/cephtest/rbd-pwl-cache', 'rm -rf /home/ubuntu/cephtest/tmpfs']}}, {'qemu': {'client.0': {'cpus': 4, 'disks': 3, 'memory': 4096, 'test': 'qa/run_xfstests_qemu.sh', 'type': 'block'}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'clyso-debian-13', 'teuthology_repo': 'https://github.com/clyso/teuthology', 'teuthology_sha1': '1c580df7a9c7c2aadc272da296344fd99f27c444', 'timestamp': '2026-03-20_22:04:26', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345'} 2026-03-21T14:43:06.121 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-21T14:43:06.160 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-21T14:43:06.176 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/run/ceph 2026-03-21T14:43:06.218 DEBUG:teuthology.orchestra.run.vm05:> sudo install -d -m0777 -- /var/run/ceph 2026-03-21T14:43:06.243 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:06.243 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout 2026-03-21T14:43:06.299 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-21T14:43:06.299 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_1 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 632 Links: 1 
2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 14:42:55.774442932 +0000 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 14:41:42.863407126 +0000 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 14:41:42.863407126 +0000 2026-03-21T14:43:06.355 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-21 14:41:42.863407126 +0000 2026-03-21T14:43:06.356 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-21T14:43:06.419 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-21T14:43:06.419 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-21T14:43:06.419 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000137987 s, 3.7 MB/s 2026-03-21T14:43:06.420 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-21T14:43:06.478 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_2 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 694 Links: 1 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 14:42:55.774442932 +0000 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 14:41:43.093407501 +0000 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 14:41:43.093407501 +0000 2026-03-21T14:43:06.535 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-21 14:41:43.093407501 +0000 2026-03-21T14:43:06.535 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-21T14:43:06.598 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-21T14:43:06.598 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-21T14:43:06.598 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000134411 s, 3.8 MB/s 2026-03-21T14:43:06.599 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-21T14:43:06.654 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_3 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 723 Links: 1 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 14:42:55.775442933 +0000 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 14:41:43.314407861 +0000 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 14:41:43.314407861 +0000 2026-03-21T14:43:06.711 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-21 14:41:43.314407861 +0000 2026-03-21T14:43:06.711 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-21T14:43:06.774 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-21T14:43:06.774 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-21T14:43:06.774 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000127268 s, 4.0 MB/s 2026-03-21T14:43:06.774 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-21T14:43:06.831 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_4 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Device: 6h/6d Inode: 768 Links: 1 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 14:42:55.775442933 +0000 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 14:41:43.545408237 +0000 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 14:41:43.545408237 +0000 2026-03-21T14:43:06.888 INFO:teuthology.orchestra.run.vm01.stdout: Birth: 2026-03-21 14:41:43.545408237 +0000 2026-03-21T14:43:06.888 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-21T14:43:06.951 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in 2026-03-21T14:43:06.951 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out 2026-03-21T14:43:06.951 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000121667 s, 4.2 MB/s 2026-03-21T14:43:06.952 DEBUG:teuthology.orchestra.run.vm01:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-21T14:43:07.007 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'} 2026-03-21T14:43:07.007 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:07.007 DEBUG:teuthology.orchestra.run.vm05:> dd if=/scratch_devs of=/dev/stdout 2026-03-21T14:43:07.022 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-21T14:43:07.023 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_1 2026-03-21T14:43:07.079 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-21T14:43:07.079 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:07.079 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 631 Links: 1 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-21 14:43:02.181517136 +0000 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-21 14:41:47.180510740 +0000 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-21 14:41:47.180510740 +0000 2026-03-21T14:43:07.080 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-21 14:41:47.180510740 +0000 2026-03-21T14:43:07.080 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-21T14:43:07.145 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-21T14:43:07.145 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-21T14:43:07.145 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000150051 s, 3.4 MB/s 2026-03-21T14:43:07.146 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-21T14:43:07.205 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_2 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 694 Links: 1 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-21 14:43:02.181517136 +0000 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-21 14:41:47.437511636 +0000 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-21 14:41:47.437511636 +0000 2026-03-21T14:43:07.265 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-21 14:41:47.437511636 +0000 2026-03-21T14:43:07.265 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-21T14:43:07.329 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-21T14:43:07.329 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-21T14:43:07.329 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.00012896 s, 4.0 MB/s 2026-03-21T14:43:07.329 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-21T14:43:07.385 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_3 2026-03-21T14:43:07.440 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 729 Links: 1 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-21 14:43:02.181517136 +0000 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-21 14:41:47.651512382 +0000 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-21 14:41:47.651512382 +0000 2026-03-21T14:43:07.441 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-21 14:41:47.651512382 +0000 2026-03-21T14:43:07.441 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-21T14:43:07.503 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-21T14:43:07.503 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-21T14:43:07.503 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000145883 s, 3.5 MB/s 2026-03-21T14:43:07.504 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-21T14:43:07.560 DEBUG:teuthology.orchestra.run.vm05:> stat /dev/vg_nvme/lv_4 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Device: 6h/6d Inode: 745 Links: 1 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Context: system_u:object_r:device_t:s0 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Access: 2026-03-21 14:43:02.182517137 +0000 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Modify: 2026-03-21 14:41:47.881513184 +0000 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout:Change: 2026-03-21 14:41:47.881513184 +0000 2026-03-21T14:43:07.618 INFO:teuthology.orchestra.run.vm05.stdout: Birth: 2026-03-21 14:41:47.881513184 +0000 2026-03-21T14:43:07.618 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-21T14:43:07.682 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records in 2026-03-21T14:43:07.682 INFO:teuthology.orchestra.run.vm05.stderr:1+0 records out 2026-03-21T14:43:07.682 INFO:teuthology.orchestra.run.vm05.stderr:512 bytes copied, 0.000128581 s, 4.0 MB/s 2026-03-21T14:43:07.683 DEBUG:teuthology.orchestra.run.vm05:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4
2026-03-21T14:43:07.739 INFO:tasks.ceph:osd dev map: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}
2026-03-21T14:43:07.739 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm01.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm05.local'): {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}}
2026-03-21T14:43:07.739 INFO:tasks.ceph:Generating config...
2026-03-21T14:43:07.740 INFO:tasks.ceph:[client] rbd_persistent_cache_mode = rwl
2026-03-21T14:43:07.740 INFO:tasks.ceph:[client] rbd_persistent_cache_path = /home/ubuntu/cephtest/rbd-pwl-cache
2026-03-21T14:43:07.740 INFO:tasks.ceph:[client] rbd_persistent_cache_size = 1073741824
2026-03-21T14:43:07.740 INFO:tasks.ceph:[client] rbd_plugins = pwl_cache
2026-03-21T14:43:07.740 INFO:tasks.ceph:[global] mon warn on pool no app = False
2026-03-21T14:43:07.740 INFO:tasks.ceph:[mgr] debug mgr = 20
2026-03-21T14:43:07.740 INFO:tasks.ceph:[mgr] debug ms = 1
2026-03-21T14:43:07.740 INFO:tasks.ceph:[mon] debug mon = 20
2026-03-21T14:43:07.740 INFO:tasks.ceph:[mon] debug ms = 1
2026-03-21T14:43:07.740 INFO:tasks.ceph:[mon] debug paxos = 20
2026-03-21T14:43:07.740 INFO:tasks.ceph:[osd] debug ms = 1
2026-03-21T14:43:07.740 INFO:tasks.ceph:[osd] debug osd = 20
2026-03-21T14:43:07.740 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000
2026-03-21T14:43:07.740 INFO:tasks.ceph:Setting up mon.a...
2026-03-21T14:43:07.740 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring
2026-03-21T14:43:07.777 INFO:teuthology.orchestra.run.vm01.stdout:creating /etc/ceph/ceph.keyring
2026-03-21T14:43:07.780 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring
2026-03-21T14:43:07.859 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-03-21T14:43:07.921 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.101'), ('mon.b', '192.168.123.105')]
2026-03-21T14:43:07.922 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.101,192.168.123.105', 'mon warn on pool no app': False}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': 'true', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'rbd_persistent_cache_mode': 'rwl', 'rbd_persistent_cache_path': '/home/ubuntu/cephtest/rbd-pwl-cache', 'rbd_persistent_cache_size': 1073741824, 'rbd_plugins': 'pwl_cache'}, 'mon.a': {}, 'mon.b': {}}
2026-03-21T14:43:07.922 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:07.922 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf
2026-03-21T14:43:07.977 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.101 --add b 192.168.123.105 --print /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: generated fsid b533d616-fa1d-488f-abe2-f7b7efba8c44
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = tentacle
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:epoch 0
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:fsid b533d616-fa1d-488f-abe2-f7b7efba8c44
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:last_changed 2026-03-21T14:43:08.049456+0000
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:created 2026-03-21T14:43:08.049456+0000
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:min_mon_release 20 (tentacle)
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:election_strategy: 1
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:1: [v2:192.168.123.105:3300/0,v1:192.168.123.105:6789/0] mon.b
2026-03-21T14:43:08.050 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (2 monitors)
2026-03-21T14:43:08.051 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf
2026-03-21T14:43:08.105 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID b533d616-fa1d-488f-abe2-f7b7efba8c44...
2026-03-21T14:43:08.106 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null
2026-03-21T14:43:08.147 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null
2026-03-21T14:43:08.185 INFO:teuthology.orchestra.run.vm01.stdout:[global]
2026-03-21T14:43:08.185 INFO:teuthology.orchestra.run.vm01.stdout: chdir = ""
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: auth supported = cephx
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: filestore xattr use omap = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon clock drift allowed = 1.000
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd crush chooseleaf type = 0
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: auth debug = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: ms die on old message = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: ms die on bug = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon pg warn max object skew = 0
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: # disable pg_autoscaler by default for new pools
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd_pool_default_pg_autoscale_mode = off
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default size = 2
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow primary affinity = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow pg remap = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on legacy crush tunables = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on crush straw calc version zero = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on no sortbitwise = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on osd down out interval zero = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on too few osds = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_no_redundancy = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon_allow_pool_size_one = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd default data pool replay window = 5
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon allow pool delete = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon cluster log file level = debug
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: debug asserts on shutdown = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon health detail to clog = false
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon host = "192.168.123.101,192.168.123.105"
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on pool no app = False
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: fsid = b533d616-fa1d-488f-abe2-f7b7efba8c44
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:[osd]
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd journal size = 100
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub load threshold = 5.0
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub max interval = 600
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock profile = high_recovery_ops
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock skip benchmark = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd recover clone overlap = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd recovery max chunk = 1048576
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd debug shutdown = true
2026-03-21T14:43:08.186 INFO:teuthology.orchestra.run.vm01.stdout: osd debug op order = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd debug verify stray on activate = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd debug trim objects = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd open classes on start = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd debug pg log writeout = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd deep scrub update digest min age = 30
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd map max advance = 10
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: journal zero on create = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: filestore ondisk finisher threads = 3
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: filestore apply finisher threads = 3
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: bdev debug aio = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd debug misdirected ops = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug osd = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:[mgr]
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug mgr = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mgr/telemetry/nag = false
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:[mon]
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug paxos = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon data avail warn = 5
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon mgr mkfs grace = 240
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon osd reporter subtree level = osd
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon osd prime pg temp = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: auth mon ticket ttl = 660 # 11m
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: auth service ticket ttl = 240 # 4m
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: # don't complain about insecure global_id in the test suite
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: # 1m isn't quite enough
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon_down_mkfs_grace = 2m
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_filestore_osds = false
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:[client]
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rgw cache enabled = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable ops log = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable usage log = true
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rbd_persistent_cache_mode = rwl
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rbd_persistent_cache_path = /home/ubuntu/cephtest/rbd-pwl-cache
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rbd_persistent_cache_size = 1073741824
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout: rbd_plugins = pwl_cache
2026-03-21T14:43:08.187 INFO:teuthology.orchestra.run.vm01.stdout:[mon.a]
2026-03-21T14:43:08.188 INFO:teuthology.orchestra.run.vm01.stdout:[mon.b]
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:[global]
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: chdir = ""
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: pid file = /var/run/ceph/$cluster-$name.pid
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: auth supported = cephx
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: filestore xattr use omap = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon clock drift allowed = 1.000
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: osd crush chooseleaf type = 0
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: auth debug = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: ms die on old message = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: ms die on bug = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon max pg per osd = 10000 # >= luminous
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon pg warn max object skew = 0
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: # disable pg_autoscaler by default for new pools
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: osd_pool_default_pg_autoscale_mode = off
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default size = 2
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow primary affinity = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon osd allow pg remap = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on legacy crush tunables = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on crush straw calc version zero = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on no sortbitwise = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on osd down out interval zero = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on too few osds = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_pool_no_redundancy = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon_allow_pool_size_one = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: osd default data pool replay window = 5
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon allow pool delete = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon cluster log file level = debug
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: debug asserts on shutdown = true
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon health detail to clog = false
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon host = "192.168.123.101,192.168.123.105"
2026-03-21T14:43:08.189 INFO:teuthology.orchestra.run.vm05.stdout: mon warn on pool no app = False
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: fsid = b533d616-fa1d-488f-abe2-f7b7efba8c44
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:[osd]
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd journal size = 100
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub load threshold = 5.0
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd scrub max interval = 600
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock profile = high_recovery_ops
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock skip benchmark = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd recover clone overlap = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd recovery max chunk = 1048576
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug shutdown = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug op order = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug verify stray on activate = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug trim objects = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd open classes on start = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug pg log writeout = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd deep scrub update digest min age = 30
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd map max advance = 10
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: journal zero on create = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: filestore ondisk finisher threads = 3
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: filestore apply finisher threads = 3
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: bdev debug aio = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd debug misdirected ops = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug osd = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:[mgr]
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug mgr = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mgr/telemetry/nag = false
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:[mon]
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug ms = 1
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug mon = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug paxos = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: debug auth = 20
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon data avail warn = 5
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon mgr mkfs grace = 240
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min pgs per osd = 4
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon osd reporter subtree level = osd
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon osd prime pg temp = true
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon reweight min bytes per osd = 10
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: auth mon ticket ttl = 660 # 11m
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: auth service ticket ttl = 240 # 4m
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: # don't complain about insecure global_id in the test suite
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.190 INFO:teuthology.orchestra.run.vm05.stdout: # 1m isn't quite enough
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: mon_down_mkfs_grace = 2m
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: mon_warn_on_filestore_osds = false
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout:
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout:[client]
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rgw cache enabled = true
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable ops log = true
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rgw enable usage log = true
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rbd_persistent_cache_mode = rwl
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rbd_persistent_cache_path = /home/ubuntu/cephtest/rbd-pwl-cache
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rbd_persistent_cache_size = 1073741824
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout: rbd_plugins = pwl_cache
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout:[mon.a]
2026-03-21T14:43:08.191 INFO:teuthology.orchestra.run.vm05.stdout:[mon.b]
2026-03-21T14:43:08.201 INFO:tasks.ceph:Creating admin key on mon.a...
2026-03-21T14:43:08.201 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring 2026-03-21T14:43:08.279 INFO:tasks.ceph:Copying monmap to all nodes... 2026-03-21T14:43:08.279 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:08.279 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout 2026-03-21T14:43:08.333 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:08.333 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout 2026-03-21T14:43:08.388 INFO:tasks.ceph:Sending monmap to node ubuntu@vm01.local 2026-03-21T14:43:08.388 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:08.388 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-21T14:43:08.388 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-21T14:43:08.459 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T14:43:08.459 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-21T14:43:08.513 INFO:tasks.ceph:Sending monmap to node ubuntu@vm05.local 2026-03-21T14:43:08.513 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:08.513 DEBUG:teuthology.orchestra.run.vm05:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-21T14:43:08.513 DEBUG:teuthology.orchestra.run.vm05:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-21T14:43:08.544 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:43:08.545 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-21T14:43:08.602 INFO:tasks.ceph:Setting up mon nodes... 2026-03-21T14:43:08.602 INFO:tasks.ceph:Setting up mgr nodes... 
2026-03-21T14:43:08.602 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring 2026-03-21T14:43:08.644 INFO:teuthology.orchestra.run.vm01.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring 2026-03-21T14:43:08.646 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/mgr/ceph-y && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.y /var/lib/ceph/mgr/ceph-y/keyring 2026-03-21T14:43:08.693 INFO:teuthology.orchestra.run.vm05.stdout:creating /var/lib/ceph/mgr/ceph-y/keyring 2026-03-21T14:43:08.696 INFO:tasks.ceph:Setting up mds nodes... 2026-03-21T14:43:08.696 INFO:tasks.ceph_client:Setting up client nodes... 2026-03-21T14:43:08.696 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-21T14:43:08.731 INFO:teuthology.orchestra.run.vm05.stdout:creating /etc/ceph/ceph.client.0.keyring 2026-03-21T14:43:08.743 INFO:tasks.ceph:Running mkfs on osd nodes... 
2026-03-21T14:43:08.743 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm01.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}, Remote(name='ubuntu@vm05.local'): {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}} 2026-03-21T14:43:08.743 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/osd/ceph-0 2026-03-21T14:43:08.767 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'} 2026-03-21T14:43:08.767 INFO:tasks.ceph:role: osd.0 2026-03-21T14:43:08.767 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm01.local 2026-03-21T14:43:08.767 DEBUG:teuthology.orchestra.run.vm01:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-21T14:43:08.831 INFO:teuthology.orchestra.run.vm01.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-21T14:43:08.831 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-21T14:43:08.831 INFO:teuthology.orchestra.run.vm01.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout: = sunit=0 swidth=0 blks 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-21T14:43:08.832 INFO:teuthology.orchestra.run.vm01.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-21T14:43:08.836 INFO:teuthology.orchestra.run.vm01.stdout:Discarding blocks...Done. 
2026-03-21T14:43:08.839 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm01.local -o noatime
2026-03-21T14:43:08.839 DEBUG:teuthology.orchestra.run.vm01:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0
2026-03-21T14:43:08.911 DEBUG:teuthology.orchestra.run.vm01:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0
2026-03-21T14:43:08.977 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/osd/ceph-1
2026-03-21T14:43:09.042 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2'}
2026-03-21T14:43:09.042 INFO:tasks.ceph:role: osd.1
2026-03-21T14:43:09.042 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm01.local
2026-03-21T14:43:09.042 DEBUG:teuthology.orchestra.run.vm01:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout: = sunit=0 swidth=0 blks
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T14:43:09.107 INFO:teuthology.orchestra.run.vm01.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T14:43:09.112 INFO:teuthology.orchestra.run.vm01.stdout:Discarding blocks...Done.
2026-03-21T14:43:09.114 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm01.local -o noatime
2026-03-21T14:43:09.114 DEBUG:teuthology.orchestra.run.vm01:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1
2026-03-21T14:43:09.185 DEBUG:teuthology.orchestra.run.vm01:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1
2026-03-21T14:43:09.254 DEBUG:teuthology.orchestra.run.vm01:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:09.334 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.332+0000 7fd161411900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2026-03-21T14:43:09.334 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.332+0000 7fd161411900 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
2026-03-21T14:43:09.334 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.332+0000 7fd161411900 -1 bdev(0x556a67c5d800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted
2026-03-21T14:43:09.334 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.332+0000 7fd161411900 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid
2026-03-21T14:43:09.805 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-21T14:43:09.872 DEBUG:teuthology.orchestra.run.vm01:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:09.950 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.948+0000 7ff4c292d900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2026-03-21T14:43:09.950 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.948+0000 7ff4c292d900 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
2026-03-21T14:43:09.950 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.948+0000 7ff4c292d900 -1 bdev(0x562e0013b800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted
2026-03-21T14:43:09.950 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T14:43:09.948+0000 7ff4c292d900 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid
2026-03-21T14:43:10.422 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-21T14:43:10.488 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/osd/ceph-2
2026-03-21T14:43:10.511 INFO:tasks.ceph:roles_to_devs: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}
2026-03-21T14:43:10.511 INFO:tasks.ceph:role: osd.2
2026-03-21T14:43:10.511 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm05.local
2026-03-21T14:43:10.512 DEBUG:teuthology.orchestra.run.vm05:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout: = sunit=0 swidth=0 blks
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T14:43:10.578 INFO:teuthology.orchestra.run.vm05.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T14:43:10.583 INFO:teuthology.orchestra.run.vm05.stdout:Discarding blocks...Done.
2026-03-21T14:43:10.587 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm05.local -o noatime
2026-03-21T14:43:10.587 DEBUG:teuthology.orchestra.run.vm05:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-2
2026-03-21T14:43:10.657 DEBUG:teuthology.orchestra.run.vm05:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2
2026-03-21T14:43:10.723 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/osd/ceph-3
2026-03-21T14:43:10.791 INFO:tasks.ceph:roles_to_devs: {'osd.2': '/dev/vg_nvme/lv_1', 'osd.3': '/dev/vg_nvme/lv_2'}
2026-03-21T14:43:10.791 INFO:tasks.ceph:role: osd.3
2026-03-21T14:43:10.791 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm05.local
2026-03-21T14:43:10.791 DEBUG:teuthology.orchestra.run.vm05:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout: = sunit=0 swidth=0 blks
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout:log =internal log bsize=4096 blocks=16384, version=2
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T14:43:10.856 INFO:teuthology.orchestra.run.vm05.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T14:43:10.861 INFO:teuthology.orchestra.run.vm05.stdout:Discarding blocks...Done.
2026-03-21T14:43:10.863 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm05.local -o noatime
2026-03-21T14:43:10.863 DEBUG:teuthology.orchestra.run.vm05:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-3
2026-03-21T14:43:10.933 DEBUG:teuthology.orchestra.run.vm05:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-3
2026-03-21T14:43:11.001 DEBUG:teuthology.orchestra.run.vm05:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:11.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.082+0000 7f35f7c0f900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2026-03-21T14:43:11.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.082+0000 7f35f7c0f900 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
2026-03-21T14:43:11.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.082+0000 7f35f7c0f900 -1 bdev(0x55a295acb800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted
2026-03-21T14:43:11.084 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.082+0000 7f35f7c0f900 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid
2026-03-21T14:43:11.536 DEBUG:teuthology.orchestra.run.vm05:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-21T14:43:11.601 DEBUG:teuthology.orchestra.run.vm05:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 3 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:11.678 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.676+0000 7fea99541900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-3/keyring: can't open /var/lib/ceph/osd/ceph-3/keyring: (2) No such file or directory
2026-03-21T14:43:11.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.677+0000 7fea99541900 -1 created new key in keyring /var/lib/ceph/osd/ceph-3/keyring
2026-03-21T14:43:11.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.677+0000 7fea99541900 -1 bdev(0x5575d0e79800 /var/lib/ceph/osd/ceph-3/block) open stat got: (1) Operation not permitted
2026-03-21T14:43:11.679 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21T14:43:11.677+0000 7fea99541900 -1 bluestore(/var/lib/ceph/osd/ceph-3) _read_fsid unparsable uuid
2026-03-21T14:43:12.148 DEBUG:teuthology.orchestra.run.vm05:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
2026-03-21T14:43:12.212 INFO:tasks.ceph:Reading keys from all nodes...
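[editor's note: the OSD preparation phase above repeats the same command sequence per OSD. As an illustrative, hedged sketch only, the following dry-run helper prints that sequence for a given OSD id and device; the function name `osd_prep_cmds` is invented for this sketch, nothing is executed against real disks, and the paths simply mirror the ones logged above.]

```shell
# Dry-run sketch of the per-OSD preparation sequence seen in the log.
# Prints the commands instead of running them (no sudo, no real devices).
osd_prep_cmds() {
  osd_id=$1
  dev=$2
  mnt=/var/lib/ceph/osd/ceph-$osd_id
  printf '%s\n' \
    "mkdir -p $mnt" \
    "yes | mkfs.xfs -f -i size=2048 $dev" \
    "mount -t xfs -o noatime $dev $mnt" \
    "ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i $osd_id --monmap /home/ubuntu/cephtest/ceph.monmap" \
    "chown -R ceph:ceph $mnt"
}

osd_prep_cmds 0 /dev/vg_nvme/lv_1
```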
2026-03-21T14:43:12.213 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:12.213 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout
2026-03-21T14:43:12.236 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:12.236 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout
2026-03-21T14:43:12.298 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:12.298 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout
2026-03-21T14:43:12.361 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:12.361 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/mgr/ceph-y/keyring of=/dev/stdout
2026-03-21T14:43:12.387 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:12.387 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout
2026-03-21T14:43:12.453 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:12.453 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-3/keyring of=/dev/stdout
2026-03-21T14:43:12.517 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:12.517 DEBUG:teuthology.orchestra.run.vm05:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout
2026-03-21T14:43:12.573 INFO:tasks.ceph:Adding keys to all mons...
2026-03-21T14:43:12.573 DEBUG:teuthology.orchestra.run.vm01:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-21T14:43:12.575 DEBUG:teuthology.orchestra.run.vm05:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-21T14:43:12.615 INFO:teuthology.orchestra.run.vm01.stdout:[mgr.x]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB8rr5pdDNVJhAARRp8z3avsVZnrALdl2YuOg==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[osd.0]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB9rr5pIhriExAAgY1bIichVRLdMHhKiPF0EA==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[osd.1]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB9rr5p30KVOBAAL1q+9at7dAmwV7Ow48neng==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[mgr.y]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB8rr5przVIKRAAah8ACoi4WsPfyxvgPne45g==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[osd.2]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB/rr5piq3+BBAAwehzucizAHf65T3UC76+wQ==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[osd.3]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB/rr5pm09qKBAAn8K8juJ7UhMVGEc+FBorgQ==
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout:[client.0]
2026-03-21T14:43:12.616 INFO:teuthology.orchestra.run.vm01.stdout: key = AQB8rr5p6DKRKxAA0TVATa46zQ2Oe7uf3zMbvg==
2026-03-21T14:43:12.637 INFO:teuthology.orchestra.run.vm05.stdout:[mgr.x]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB8rr5pdDNVJhAARRp8z3avsVZnrALdl2YuOg==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[osd.0]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB9rr5pIhriExAAgY1bIichVRLdMHhKiPF0EA==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[osd.1]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB9rr5p30KVOBAAL1q+9at7dAmwV7Ow48neng==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[mgr.y]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB8rr5przVIKRAAah8ACoi4WsPfyxvgPne45g==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[osd.2]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB/rr5piq3+BBAAwehzucizAHf65T3UC76+wQ==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[osd.3]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB/rr5pm09qKBAAn8K8juJ7UhMVGEc+FBorgQ==
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout:[client.0]
2026-03-21T14:43:12.638 INFO:teuthology.orchestra.run.vm05.stdout: key = AQB8rr5p6DKRKxAA0TVATa46zQ2Oe7uf3zMbvg==
2026-03-21T14:43:12.638 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-21T14:43:12.659 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-21T14:43:12.719 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:12.741 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:12.801 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:12.823 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:12.881 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-21T14:43:12.904 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.y --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-21T14:43:12.963 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:12.987 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:13.045 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:13.069 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.3 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T14:43:13.128 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-21T14:43:13.150 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-21T14:43:13.211 INFO:tasks.ceph:Running mkfs on mon nodes...
2026-03-21T14:43:13.211 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mon/ceph-a
2026-03-21T14:43:13.236 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-03-21T14:43:13.330 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a
2026-03-21T14:43:13.356 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /var/lib/ceph/mon/ceph-b
2026-03-21T14:43:13.380 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i b --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-03-21T14:43:13.471 DEBUG:teuthology.orchestra.run.vm05:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-b
2026-03-21T14:43:13.494 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:13.496 DEBUG:teuthology.orchestra.run.vm05:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-03-21T14:43:13.548 INFO:tasks.ceph:Starting mon daemons in cluster ceph...
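[editor's note: the phase above reads each daemon keyring with `dd` and appends it to a shared /etc/ceph/ceph.keyring via `sudo tee -a` before caps are attached. As a hedged, self-contained sketch of that aggregation step only (scratch files instead of real keyrings, and without `ceph-authtool`, which would not be available outside a Ceph node):]

```shell
# Sketch of keyring aggregation: concatenate per-daemon keyring sections
# into one shared keyring file, mirroring the dd | tee -a pattern above.
workdir=$(mktemp -d)
printf '[osd.0]\n\tkey = AAAA==\n' > "$workdir/ceph-0.keyring"
printf '[mgr.x]\n\tkey = BBBB==\n' > "$workdir/ceph-x.keyring"
# Each dd below stands in for "sudo dd if=<daemon keyring> of=/dev/stdout".
dd if="$workdir/ceph-0.keyring" 2>/dev/null | tee -a "$workdir/ceph.keyring" > /dev/null
dd if="$workdir/ceph-x.keyring" 2>/dev/null | tee -a "$workdir/ceph.keyring" > /dev/null
grep -c '^\[' "$workdir/ceph.keyring"   # prints 2: both sections aggregated
```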
2026-03-21T14:43:13.549 INFO:tasks.ceph.mon.a:Restarting daemon
2026-03-21T14:43:13.549 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a
2026-03-21T14:43:13.551 INFO:tasks.ceph.mon.a:Started
2026-03-21T14:43:13.551 INFO:tasks.ceph.mon.b:Restarting daemon
2026-03-21T14:43:13.551 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i b
2026-03-21T14:43:13.591 INFO:tasks.ceph.mon.b:Started
2026-03-21T14:43:13.591 INFO:tasks.ceph:Starting mgr daemons in cluster ceph...
2026-03-21T14:43:13.592 INFO:tasks.ceph.mgr.x:Restarting daemon
2026-03-21T14:43:13.592 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x
2026-03-21T14:43:13.593 INFO:tasks.ceph.mgr.x:Started
2026-03-21T14:43:13.593 INFO:tasks.ceph.mgr.y:Restarting daemon
2026-03-21T14:43:13.593 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i y
2026-03-21T14:43:13.595 INFO:tasks.ceph.mgr.y:Started
2026-03-21T14:43:13.595 DEBUG:tasks.ceph:set 0 configs
2026-03-21T14:43:13.595 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph config dump
2026-03-21T14:43:13.886 INFO:teuthology.orchestra.run.vm01.stdout:WHO MASK LEVEL OPTION VALUE RO
2026-03-21T14:43:13.897 INFO:tasks.ceph:Setting crush tunables to default
2026-03-21T14:43:13.897 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd crush tunables default
2026-03-21T14:43:14.012 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default
2026-03-21T14:43:14.033 INFO:tasks.ceph:check_enable_crimson: False
2026-03-21T14:43:14.033 INFO:tasks.ceph:Starting osd daemons in cluster ceph...
2026-03-21T14:43:14.033 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:14.033 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout
2026-03-21T14:43:14.071 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T14:43:14.071 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout
2026-03-21T14:43:14.137 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:14.137 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout
2026-03-21T14:43:14.162 DEBUG:teuthology.orchestra.run.vm05:> set -ex
2026-03-21T14:43:14.162 DEBUG:teuthology.orchestra.run.vm05:> sudo dd if=/var/lib/ceph/osd/ceph-3/fsid of=/dev/stdout
2026-03-21T14:43:14.228 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --cluster ceph osd new 6dd81534-8c99-46c7-bbfb-362bd5315e72 0
2026-03-21T14:43:14.392 INFO:teuthology.orchestra.run.vm05.stdout:0
2026-03-21T14:43:14.402 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --cluster ceph osd new f9879bc9-c485-4448-946b-608bd3c5f1b6 1
2026-03-21T14:43:14.528 INFO:teuthology.orchestra.run.vm05.stdout:1
2026-03-21T14:43:14.540 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --cluster ceph osd new 7ecdf495-8d95-49ec-be60-da5e00cecd99 2
2026-03-21T14:43:14.664 INFO:teuthology.orchestra.run.vm05.stdout:2
2026-03-21T14:43:14.675 DEBUG:teuthology.orchestra.run.vm05:> sudo ceph --cluster ceph osd new d0911d39-8504-4ebe-9bb2-cdd1b7decec2 3
2026-03-21T14:43:14.677 INFO:tasks.ceph.mgr.y.vm05.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-21T14:43:14.677 INFO:tasks.ceph.mgr.y.vm05.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-21T14:43:14.677 INFO:tasks.ceph.mgr.y.vm05.stderr: from numpy import show_config as show_numpy_config
2026-03-21T14:43:14.684 INFO:tasks.ceph.mgr.x.vm01.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-21T14:43:14.684 INFO:tasks.ceph.mgr.x.vm01.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-21T14:43:14.684 INFO:tasks.ceph.mgr.x.vm01.stderr: from numpy import show_config as show_numpy_config
2026-03-21T14:43:14.804 INFO:teuthology.orchestra.run.vm05.stdout:3
2026-03-21T14:43:14.821 INFO:tasks.ceph.osd.0:Restarting daemon
2026-03-21T14:43:14.821 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0
2026-03-21T14:43:14.823 INFO:tasks.ceph.osd.0:Started
2026-03-21T14:43:14.823 INFO:tasks.ceph.osd.1:Restarting daemon
2026-03-21T14:43:14.823 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1
2026-03-21T14:43:14.826 INFO:tasks.ceph.osd.1:Started
2026-03-21T14:43:14.826 INFO:tasks.ceph.osd.2:Restarting daemon
2026-03-21T14:43:14.826 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2
2026-03-21T14:43:14.865 INFO:tasks.ceph.osd.2:Started
2026-03-21T14:43:14.865 INFO:tasks.ceph.osd.3:Restarting daemon
2026-03-21T14:43:14.865 DEBUG:teuthology.orchestra.run.vm05:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 3
2026-03-21T14:43:14.868 INFO:tasks.ceph.osd.3:Started
2026-03-21T14:43:14.868 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json
2026-03-21T14:43:14.995 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T14:43:14.995 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":6,"fsid":"b533d616-fa1d-488f-abe2-f7b7efba8c44","created":"2026-03-21T14:43:13.834429+0000","modified":"2026-03-21T14:43:14.799411+0000","last_up_change":"0.000000","last_in_change":"2026-03-21T14:43:14.799411+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":4,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"6dd81534-8c99-46c7-bbfb-362bd5315e72","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"f9879bc9-c485-4448-946b-608bd3c5f1b6","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"7ecdf495-8d95-49ec-be60-da5e00cecd99","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"d0911d39-8504-4ebe-9bb2-cdd1b7decec2","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-21T14:43:15.004 INFO:tasks.ceph.ceph_manager.ceph:[]
2026-03-21T14:43:15.004 INFO:tasks.ceph:Waiting for OSDs to come up
2026-03-21T14:43:15.016 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T14:43:15.014+0000 7f91f8c05900 -1 Falling back to public interface
2026-03-21T14:43:15.018 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T14:43:15.016+0000 7f75453c1900 -1 Falling back to public interface
2026-03-21T14:43:15.077 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T14:43:15.075+0000 7fc238b97900 -1 Falling back to public interface
2026-03-21T14:43:15.082 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T14:43:15.080+0000 7fc72c012900 -1 Falling back to public interface
2026-03-21T14:43:15.147 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T14:43:15.144+0000 7f75453c1900 -1 osd.0 0 log_to_monitors true
2026-03-21T14:43:15.206 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T14:43:15.204+0000 7f91f8c05900 -1 osd.1 0 log_to_monitors true
2026-03-21T14:43:15.236 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T14:43:15.234+0000 7fc238b97900 -1 osd.3 0 log_to_monitors true
2026-03-21T14:43:15.252 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T14:43:15.250+0000 7fc72c012900 -1 osd.2 0 log_to_monitors true
2026-03-21T14:43:15.407 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json
2026-03-21T14:43:15.526 INFO:teuthology.misc.health.vm01.stdout:
2026-03-21T14:43:15.526 INFO:teuthology.misc.health.vm01.stdout:{"epoch":6,"fsid":"b533d616-fa1d-488f-abe2-f7b7efba8c44","created":"2026-03-21T14:43:13.834429+0000","modified":"2026-03-21T14:43:14.799411+0000","last_up_change":"0.000000","last_in_change":"2026-03-21T14:43:14.799411+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":4,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"6dd81534-8c99-46c7-bbfb-362bd5315e72","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"f9879bc9-c485-4448-946b-608bd3c5f1b6","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"7ecdf495-8d95-49ec-be60-da5e00cecd99","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":3,"uuid":"d0911d39-8504-4ebe-9bb2-cdd1b7decec2","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-21T14:43:15.535 DEBUG:teuthology.misc:0 of 4 OSDs are up
2026-03-21T14:43:16.854 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T14:43:16.852+0000 7fc727f9d640 -1 osd.2 0 waiting for initial osdmap
2026-03-21T14:43:16.854 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T14:43:16.852+0000 7f7541b64640 -1 osd.0 0 waiting for initial osdmap
2026-03-21T14:43:16.854 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T14:43:16.852+0000 7fc234b26640 -1 osd.3 0 waiting for initial osdmap
2026-03-21T14:43:16.855 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T14:43:16.853+0000 7f91f4b96640 -1 osd.1 0 waiting for initial osdmap
2026-03-21T14:43:16.871 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T14:43:16.869+0000 7fc722590640 -1 osd.2 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or
directory 2026-03-21T14:43:16.872 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T14:43:16.870+0000 7fc22f92b640 -1 osd.3 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T14:43:16.872 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T14:43:16.870+0000 7f753c157640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T14:43:16.872 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T14:43:16.870+0000 7f91ef99b640 -1 osd.1 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T14:43:17.282 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T14:43:17.280+0000 7f7c7ea14640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-21T14:43:21.938 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-21T14:43:22.144 INFO:teuthology.misc.health.vm01.stdout: 2026-03-21T14:43:22.144 
INFO:teuthology.misc.health.vm01.stdout:{"epoch":12,"fsid":"b533d616-fa1d-488f-abe2-f7b7efba8c44","created":"2026-03-21T14:43:13.834429+0000","modified":"2026-03-21T14:43:20.924833+0000","last_up_change":"2026-03-21T14:43:17.855609+0000","last_in_change":"2026-03-21T14:43:14.799411+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":4,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T14:43:18.282953+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"12","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"no
ne"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":4,"score_stable":4,"optimal_score":0.5,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6dd81534-8c99-46c7-bbfb-362bd5315e72","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6809","nonce":3374666381}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6811","nonce":3374666381}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6815","nonce":3374666381}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6813","nonce":3374666381}]},"public_addr":"192.168.123.101:6809/3374666381","cluster_addr":"192.168.123.101:6811/3374666381","heartbeat_back_addr":"192.168.123.101:6815/3374666381","heartbeat_front_addr":"192.168.123.101:6813/3374666381","state":["exists","up"]},{"osd":1,"uuid":"f9879bc9-c485-4448-946b-608bd3c5f1b6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6801"
,"nonce":314076420}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6803","nonce":314076420}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6807","nonce":314076420}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6805","nonce":314076420}]},"public_addr":"192.168.123.101:6801/314076420","cluster_addr":"192.168.123.101:6803/314076420","heartbeat_back_addr":"192.168.123.101:6807/314076420","heartbeat_front_addr":"192.168.123.101:6805/314076420","state":["exists","up"]},{"osd":2,"uuid":"7ecdf495-8d95-49ec-be60-da5e00cecd99","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6809","nonce":2201310623}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6811","nonce":2201310623}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6815","nonce":2201310623}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6813","nonce":2201310623}]},"public_addr":"192.168.123.105:6809/2201310623","cluster_addr":"192.168.123.105:6811/2201310623","heartbeat_back_addr":"192.168.123.105:6815/2201310623","heartbeat_front_addr":"192.168.123.105:6813/2201310623","state":["exists","up"]},{"osd":3,"uuid":"d0911d39-8504-4ebe-9bb2-cdd1b7decec2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":10,"down_at":0,"lost_at":0,"public_addrs":{"a
ddrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6801","nonce":1399269682}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6803","nonce":1399269682}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6807","nonce":1399269682}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6805","nonce":1399269682}]},"public_addr":"192.168.123.105:6801/1399269682","cluster_addr":"192.168.123.105:6803/1399269682","heartbeat_back_addr":"192.168.123.105:6807/1399269682","heartbeat_front_addr":"192.168.123.105:6805/1399269682","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"d
egraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T14:43:22.155 DEBUG:teuthology.misc:4 of 4 OSDs are up 2026-03-21T14:43:22.155 INFO:tasks.ceph:Creating RBD pool 2026-03-21T14:43:22.155 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd pool create rbd 8 2026-03-21T14:43:22.949 INFO:teuthology.orchestra.run.vm01.stderr:pool 'rbd' created 2026-03-21T14:43:22.963 DEBUG:teuthology.orchestra.run.vm01:> rbd --cluster ceph pool init rbd 2026-03-21T14:43:25.973 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 2026-03-21T14:43:25.974 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json 2026-03-21T14:43:25.974 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting 2026-03-21T14:43:26.184 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:26.195 INFO:teuthology.orchestra.run.vm01.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}] 2026-03-21T14:43:26.195 INFO:tasks.ceph_manager:config epoch is 1 2026-03-21T14:43:26.195 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
2026-03-21T14:43:26.195 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available 2026-03-21T14:43:26.195 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json 2026-03-21T14:43:26.430 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:26.442 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"flags":0,"active_gid":4106,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":812279671},{"type":"v1","addr":"192.168.123.101:6817","nonce":812279671}]},"active_addr":"192.168.123.101:6817/812279671","active_change":"2026-03-21T14:43:16.263639+0000","active_mgr_features":4544132024016699391,"available":true,"standbys":[{"gid":4101,"name":"y","mgr_features":4544132024016699391,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current `PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bo
ol","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","typ
e":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced",
"flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{
"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"te
stnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set internal store backend. for develoment and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are 
changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","leve
l":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","
level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, 
etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leader
board","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrat
or","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","lo
ng_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":4192047608}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1648366191}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1167248667}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":118354202}]}]} 2026-03-21T14:43:26.444 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 
2026-03-21T14:43:26.444 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-21T14:43:26.444 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-21T14:43:26.645 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:26.645 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":16,"fsid":"b533d616-fa1d-488f-abe2-f7b7efba8c44","created":"2026-03-21T14:43:13.834429+0000","modified":"2026-03-21T14:43:25.958989+0000","last_up_change":"2026-03-21T14:43:17.855609+0000","last_in_change":"2026-03-21T14:43:14.799411+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":4,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T14:43:18.282953+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"12","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_
max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":4,"score_stable":4,"optimal_score":0.5,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-21T14:43:22.363681+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"16","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":16,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache
_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.5,"score_stable":2.5,"optimal_score":1,"raw_score_acting":2.5,"raw_score_stable":2.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6dd81534-8c99-46c7-bbfb-362bd5315e72","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6809","nonce":3374666381}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6811","nonce":3374666381}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6815","nonce":3374666381}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6813","nonce":3374666381}]},"public_addr":"192.168.123.101:6809/3374666381","cluster_addr":"192.168.123.101:6811/3374666381","heartbeat_back_addr":"192.168.123.101:6815/3374666381","heartbeat_front_addr":"192.168.123.101:6813/3374666381","state":["exists","up"]},{"osd":1,"uuid":"f9879bc9-c485-4448-946b-608bd3c5f1b6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"l
ast_clean_end":0,"up_from":9,"up_thru":13,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6801","nonce":314076420}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6803","nonce":314076420}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6807","nonce":314076420}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6805","nonce":314076420}]},"public_addr":"192.168.123.101:6801/314076420","cluster_addr":"192.168.123.101:6803/314076420","heartbeat_back_addr":"192.168.123.101:6807/314076420","heartbeat_front_addr":"192.168.123.101:6805/314076420","state":["exists","up"]},{"osd":2,"uuid":"7ecdf495-8d95-49ec-be60-da5e00cecd99","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6809","nonce":2201310623}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6811","nonce":2201310623}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6815","nonce":2201310623}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6813","nonce":2201310623}]},"public_addr":"192.168.123.105:6809/2201310623","cluster_addr":"192.168.123.105:6811/2201310623","heartbeat_back_addr":"192.168.123.105:6815/2201310623","heartbeat_front_addr":"192.168.123.105:6813/2201310623","state":["exists","up"]},{"osd":3,"u
uid":"d0911d39-8504-4ebe-9bb2-cdd1b7decec2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":13,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6801","nonce":1399269682}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6803","nonce":1399269682}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6807","nonce":1399269682}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6805","nonce":1399269682}]},"public_addr":"192.168.123.105:6801/1399269682","cluster_addr":"192.168.123.105:6803/1399269682","heartbeat_back_addr":"192.168.123.105:6807/1399269682","heartbeat_front_addr":"192.168.123.105:6805/1399269682","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_so
l_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T14:43:26.656 INFO:tasks.ceph.ceph_manager.ceph:all up! 2026-03-21T14:43:26.656 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-21T14:43:26.854 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:26.854 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":16,"fsid":"b533d616-fa1d-488f-abe2-f7b7efba8c44","created":"2026-03-21T14:43:13.834429+0000","modified":"2026-03-21T14:43:25.958989+0000","last_up_change":"2026-03-21T14:43:17.855609+0000","last_in_change":"2026-03-21T14:43:14.799411+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":4,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T14:43:18.282953+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_e
poch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"12","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":4,"score_stable":4,"optimal_score":0.5,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-21T14:43:22.363681+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"16","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":16,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.5,"score_stable":2.5,"optimal_score":1,"raw_score_acting":2.5,"raw_score_stable":2.5,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"6dd81534-8c99-46c7-bbfb-362bd5315e72","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6809","nonce":3374666381}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6811","nonce":3374666381}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6815","nonce":3374666381}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":3374666381},{"type":"v1","addr":"192.168.123.101:6813","nonce":3374666381}]},"public_addr":"192.168.123.101:6809/3374666381","cluster_addr":"192.168.123.101:6811/3374666381","heartbeat_back_addr":"192.168.123.101:6815/3374666381","heartbeat_front_addr":"192.168.123.101:6813/3374666381","state":["exists","up"]},{"osd":1,"uuid":"f9879bc9-c485-4448-946b-608bd3c5f1b6","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":13,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6801","nonce":314076420}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6803","nonce":314076420}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6807","nonce":314076420}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":
"192.168.123.101:6804","nonce":314076420},{"type":"v1","addr":"192.168.123.101:6805","nonce":314076420}]},"public_addr":"192.168.123.101:6801/314076420","cluster_addr":"192.168.123.101:6803/314076420","heartbeat_back_addr":"192.168.123.101:6807/314076420","heartbeat_front_addr":"192.168.123.101:6805/314076420","state":["exists","up"]},{"osd":2,"uuid":"7ecdf495-8d95-49ec-be60-da5e00cecd99","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6808","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6809","nonce":2201310623}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6810","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6811","nonce":2201310623}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6814","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6815","nonce":2201310623}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6812","nonce":2201310623},{"type":"v1","addr":"192.168.123.105:6813","nonce":2201310623}]},"public_addr":"192.168.123.105:6809/2201310623","cluster_addr":"192.168.123.105:6811/2201310623","heartbeat_back_addr":"192.168.123.105:6815/2201310623","heartbeat_front_addr":"192.168.123.105:6813/2201310623","state":["exists","up"]},{"osd":3,"uuid":"d0911d39-8504-4ebe-9bb2-cdd1b7decec2","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":9,"up_thru":13,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6800","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6801","nonce":1399269682}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6802","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6803","nonce":1399269682}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6806","nonce":1399269682},{"
type":"v1","addr":"192.168.123.105:6807","nonce":1399269682}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.105:6804","nonce":1399269682},{"type":"v1","addr":"192.168.123.105:6805","nonce":1399269682}]},"public_addr":"192.168.123.105:6801/1399269682","cluster_addr":"192.168.123.105:6803/1399269682","heartbeat_back_addr":"192.168.123.105:6807/1399269682","heartbeat_front_addr":"192.168.123.105:6805/1399269682","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":3,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T14:43:26.866 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-21T14:43:26.866 
DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-21T14:43:26.866 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-21T14:43:26.866 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.3 flush_pg_stats 2026-03-21T14:43:26.981 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:26.981 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-21T14:43:26.987 INFO:teuthology.orchestra.run.vm01.stdout:38654705668 2026-03-21T14:43:26.987 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-21T14:43:26.990 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:26.990 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-21T14:43:26.995 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:26.995 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-21T14:43:27.200 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:27.203 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:27.214 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.0 2026-03-21T14:43:27.214 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:27.219 
INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.3 2026-03-21T14:43:27.227 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.1 2026-03-21T14:43:27.241 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:27.252 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705668 got 38654705667 for osd.2 2026-03-21T14:43:28.215 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-21T14:43:28.220 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-21T14:43:28.227 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-21T14:43:28.253 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-21T14:43:28.464 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:28.480 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.0 2026-03-21T14:43:28.493 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:28.493 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:28.493 INFO:teuthology.orchestra.run.vm01.stdout:38654705666 2026-03-21T14:43:28.506 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.1 2026-03-21T14:43:28.506 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705666 for osd.3 2026-03-21T14:43:28.508 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705668 got 38654705667 for osd.2 2026-03-21T14:43:29.480 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage 
timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-21T14:43:29.507 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-21T14:43:29.508 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.3 2026-03-21T14:43:29.508 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-21T14:43:29.725 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:29.739 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705667 for osd.0 2026-03-21T14:43:29.739 DEBUG:teuthology.parallel:result is None 2026-03-21T14:43:29.748 INFO:teuthology.orchestra.run.vm01.stdout:38654705668 2026-03-21T14:43:29.760 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705668 got 38654705668 for osd.2 2026-03-21T14:43:29.760 DEBUG:teuthology.parallel:result is None 2026-03-21T14:43:29.761 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:29.764 INFO:teuthology.orchestra.run.vm01.stdout:38654705667 2026-03-21T14:43:29.772 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705667 for osd.3 2026-03-21T14:43:29.772 DEBUG:teuthology.parallel:result is None 2026-03-21T14:43:29.779 INFO:tasks.ceph.ceph_manager.ceph:need seq 38654705667 got 38654705667 for osd.1 2026-03-21T14:43:29.779 DEBUG:teuthology.parallel:result is None 2026-03-21T14:43:29.779 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-21T14:43:29.779 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-21T14:43:30.021 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:30.021 
INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-21T14:43:30.031 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":17,"stamp":"2026-03-21T14:43:28.272158+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":147,"num_write_kb":2674,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":115,"ondisk_log_size":115,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":4,"num_per_pool_osds":4,"num_per_pool_omap_osds":4,"kb":419430400,"kb_used":109112,"kb_used_data":1704,"kb_used_omap":33,"kb_used_meta":107230,"kb_avail":419321288,"statfs":{"total":429496729600,"available":429384998912,"internally_reserved":0,"allocated":1744896,"data_stored":1355498,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":34456,"internal_metadata":109803880},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_r
epaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.321297"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967381+0000","last_change":"2026-03-21T14:43:25.967452+0000","last_active":"2026-03-21T14:43:25.967381+0000","last_peered":"2026-03-21T14:43:25.967381+0000","last_clean":"2026-03-21T14:43:25.967381+0000","last_became_active":"2026-03-21T14:43:23.955406+0000","last_became_peered":"2026-03-21T14:43:23.955406+0000","last_unstale":"2026-03-21T14:43:25.967381+0000","last_undegraded":"2026-03-21T14:43:25.967381+0000","last_fullsized":"2026-03-21T1
4:43:25.967381+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T15:47:15.592219+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027121699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2],"acting":[3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh
":"2026-03-21T14:43:25.965324+0000","last_change":"2026-03-21T14:43:25.965423+0000","last_active":"2026-03-21T14:43:25.965324+0000","last_peered":"2026-03-21T14:43:25.965324+0000","last_clean":"2026-03-21T14:43:25.965324+0000","last_became_active":"2026-03-21T14:43:23.954653+0000","last_became_peered":"2026-03-21T14:43:23.954653+0000","last_unstale":"2026-03-21T14:43:25.965324+0000","last_undegraded":"2026-03-21T14:43:25.965324+0000","last_fullsized":"2026-03-21T14:43:25.965324+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T20:02:00.844824+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00035453399999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967394+0000","last_change":"2026-03-21T14:43:25.967481+0000","last_active":"2026-03-21T14:43:25.967394+0000","last_peered":"2026-03-21T14:43:25.967394+0000","last_clean":"2026-03-21T14:43:25.967394+0000","last_became_active":"2026-03-21T14:43:23.956329+0000","last_became_peered":"2026-03-21T14:43:23.956329+0000","last_unstale":"2026-03-21T14:43:25.967394+0000","last_undegraded":"2026-03-21T14:43:25.967394+0000","last_fullsized":"2026-03-21T14:43:25.967394+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T15:00:52.660904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00026579799999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.965222+0000","last_change":"2026-03-21T14:43:25.965303+0000","last_active":"2026-03-21T14:43:25.965222+0000","last_peered":"2026-03-21T14:43:25.965222+0000","last_clean":"2026-03-21T14:43:25.965222+0000","last_became_active":"2026-03-
21T14:43:23.955496+0000","last_became_peered":"2026-03-21T14:43:23.955496+0000","last_unstale":"2026-03-21T14:43:25.965222+0000","last_undegraded":"2026-03-21T14:43:25.965222+0000","last_fullsized":"2026-03-21T14:43:25.965222+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T18:36:29.031861+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00022928,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[
],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"16'2","reported_seq":22,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967653+0000","last_change":"2026-03-21T14:43:25.967653+0000","last_active":"2026-03-21T14:43:25.967653+0000","last_peered":"2026-03-21T14:43:25.967653+0000","last_clean":"2026-03-21T14:43:25.967653+0000","last_became_active":"2026-03-21T14:43:23.956027+0000","last_became_peered":"2026-03-21T14:43:23.956027+0000","last_unstale":"2026-03-21T14:43:25.967653+0000","last_undegraded":"2026-03-21T14:43:25.967653+0000","last_fullsized":"2026-03-21T14:43:25.967653+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T17:49:17.393446+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00047242399999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1],"acting":[3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.965238+0000","last_change":"2026-03-21T14:43:25.965344+0000","last_active":"2026-03-21T14:43:25.965238+0000","last_peered":"2026-03-21T14:43:25.965238+0000","last_clean":"2026-03-21T14:43:25.965238+0000","last_became_active":"2026-03-21T14:43:23.955036+0000","last_became_peered":"2026-03-21T14:43:23.955036+0000","last_unstale":"2026-03-21T14:43:25.965238+0000","last_undegraded":"2026-03-21T14:43:25.965238+0000","last_fullsized":"2026-03-21T14:43:25.965238+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-23T02:01:38.014341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000228619,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967471+0000","last_change":"2026-03-21T14:43:25.967519+0000","last_active":"2026-03-21T14:43:25.967471+0000","last_peered":"2026-03-21T14:43:25.967471+0000","last_clean":"2026-03-21T14:43:25.967471+0000","last_became_active":"2026-03-21T14:43:2
3.955458+0000","last_became_peered":"2026-03-21T14:43:23.955458+0000","last_unstale":"2026-03-21T14:43:25.967471+0000","last_undegraded":"2026-03-21T14:43:25.967471+0000","last_fullsized":"2026-03-21T14:43:25.967471+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T20:23:00.621496+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00029983100000000002,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1],"acting":[3,1],"avail_no_missing":[],"object_location_counts"
:[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"14'1","reported_seq":21,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967623+0000","last_change":"2026-03-21T14:43:25.967773+0000","last_active":"2026-03-21T14:43:25.967623+0000","last_peered":"2026-03-21T14:43:25.967623+0000","last_clean":"2026-03-21T14:43:25.967623+0000","last_became_active":"2026-03-21T14:43:23.954462+0000","last_became_peered":"2026-03-21T14:43:23.954462+0000","last_unstale":"2026-03-21T14:43:25.967623+0000","last_undegraded":"2026-03-21T14:43:25.967623+0000","last_fullsized":"2026-03-21T14:43:25.967623+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T19:35:29.960976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00048056800000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2],"acting":[3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"11'112","reported_seq":157,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967269+0000","last_change":"2026-03-21T14:43:20.134759+0000","last_active":"2026-03-21T14:43:25.967269+0000","last_peered":"2026-03-21T14:43:25.967269+0000","last_clean":"2026-03-21T14:43:25.967269+0000","last_became_active":"2026-03-21T14:43:20.134081+0000","last_became_peered":"2026-03-21T14:43:20.134081+0000","last_unstale":"2026-03-21T14:43:25.967269+0000","last_undegraded":"2026-03-21T14:43:25.967269+0000","last_fullsized":"2026-03-21T14:43:25.967269+0000","mapping_epoch":10,"log_start":"0'0","ondisk_log_start":"0'0","created":10,"last_epoch_clean":11,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:18.903946+0000","last_deep_scrub":"0'0","l
ast_deep_scrub_stamp":"2026-03-21T14:43:18.903946+0000","last_clean_scrub_stamp":"2026-03-21T14:43:18.903946+0000","objects_scrubbed":0,"log_size":112,"log_dups_size":0,"ondisk_log_size":112,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-23T01:14:00.027217+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":145,"num_write_kb":2672,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_s
hallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":4},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":145,"num_write_kb":2672,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":112,"ondi
sk_log_size":112,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":3,"up_from":9,"seq":38654705667,"num_pgs":6,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":720,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":737280,"data_stored":635090,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8775,"internal_metadata":27450809},"hb_peers":[0,1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":9,"seq":38654705668,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26984,"kb_used_data":136,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830616,"statfs":{"total":107374182400,"available":107346550784,"internally_reserved":0,"allocated":139264,"data_stored":44703,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":9,"seq":38654705667,"num_pgs":5,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26988,"kb_used_data":140,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830612,"statfs":{"total":107374182400,"available":107346546688,"internally_reserved":0,"allocated":143360,"data_stored":44722,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb
_peers":[0,2,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705667,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27572,"kb_used_data":708,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":104830028,"statfs":{"total":107374182400,"available":107345948672,"internally_reserved":0,"allocated":724992,"data_stored":630983,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8127,"internal_metadata":27451457},"hb_peers":[1,2,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_
compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-21T14:43:30.032 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-21T14:43:30.241 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:30.242 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-21T14:43:30.255 INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":17,"stamp":"2026-03-21T14:43:28.272158+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":147,"num_write_kb":2674,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size"
:115,"ondisk_log_size":115,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":18,"num_osds":4,"num_per_pool_osds":4,"num_per_pool_omap_osds":4,"kb":419430400,"kb_used":109112,"kb_used_data":1704,"kb_used_omap":33,"kb_used_meta":107230,"kb_avail":419321288,"statfs":{"total":429496729600,"available":429384998912,"internally_reserved":0,"allocated":1744896,"data_stored":1355498,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":34456,"internal_metadata":109803880},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_st
ore_stats":0,"stamp_delta":"4.321297"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967381+0000","last_change":"2026-03-21T14:43:25.967452+0000","last_active":"2026-03-21T14:43:25.967381+0000","last_peered":"2026-03-21T14:43:25.967381+0000","last_clean":"2026-03-21T14:43:25.967381+0000","last_became_active":"2026-03-21T14:43:23.955406+0000","last_became_peered":"2026-03-21T14:43:23.955406+0000","last_unstale":"2026-03-21T14:43:25.967381+0000","last_undegraded":"2026-03-21T14:43:25.967381+0000","last_fullsized":"2026-03-21T14:43:25.967381+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T15:47:15.592219+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00027121699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2],"acting":[3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.965324+0000","last_change":"2026-03-21T14:43:25.965423+0000","last_active":"2026-03-21T14:43:25.965324+0000","last_peered":"2026-03-21T14:43:25.965324+0000","last_clean":"2026-03-21T14:43:25.965324+0000","last_became_active":"2026-03-21T14:43:23.954653+0000","last_became_peered":"2026-03-21T14:43:23.954653+0000","last_unstale":"2026-03-21T14:43:25.965324+0000","last_undegraded":"2026-03-21T14:43:25.965324+0000","last_fullsized":"2026-03-21T14:43:25.965324+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T20:02:00.844824+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00035453399999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967394+0000","last_change":"2026-03-21T14:43:25.967481+0000","last_active":"2026-03-21T14:43:25.967394+0000","last_peered":"2026-03-21T14:43:25.967394+0000","last_clean":"2026-03-21T14:43:25.967394+0000","last_became_active":"2026-03-
21T14:43:23.956329+0000","last_became_peered":"2026-03-21T14:43:23.956329+0000","last_unstale":"2026-03-21T14:43:25.967394+0000","last_undegraded":"2026-03-21T14:43:25.967394+0000","last_fullsized":"2026-03-21T14:43:25.967394+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T15:00:52.660904+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00026579799999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_locati
on_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.965222+0000","last_change":"2026-03-21T14:43:25.965303+0000","last_active":"2026-03-21T14:43:25.965222+0000","last_peered":"2026-03-21T14:43:25.965222+0000","last_clean":"2026-03-21T14:43:25.965222+0000","last_became_active":"2026-03-21T14:43:23.955496+0000","last_became_peered":"2026-03-21T14:43:23.955496+0000","last_unstale":"2026-03-21T14:43:25.965222+0000","last_undegraded":"2026-03-21T14:43:25.965222+0000","last_fullsized":"2026-03-21T14:43:25.965222+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T18:36:29.031861+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00022928,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"16'2","reported_seq":22,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967653+0000","last_change":"2026-03-21T14:43:25.967653+0000","last_active":"2026-03-21T14:43:25.967653+0000","last_peered":"2026-03-21T14:43:25.967653+0000","last_clean":"2026-03-21T14:43:25.967653+0000","last_became_active":"2026-03-21T14:43:23.956027+0000","last_became_peered":"2026-03-21T14:43:23.956027+0000","last_unstale":"2026-03-21T14:43:25.967653+0000","last_undegraded":"2026-03-21T14:43:25.967653+0000","last_fullsized":"2026-03-21T14:43:25.967653+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T17:49:17.393446+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00047242399999999999,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1],"acting":[3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.965238+0000","last_change":"2026-03-21T14:43:25.965344+0000","last_active":"2026-03-21T14:43:25.965238+0000","last_peered":"2026-03-21T14:43:25.965238+0000","last_clean":"2026-03-21T14:43:25.965238+0000","last_became_active":"2026-03-21T14:43:2
3.955036+0000","last_became_peered":"2026-03-21T14:43:23.955036+0000","last_unstale":"2026-03-21T14:43:25.965238+0000","last_undegraded":"2026-03-21T14:43:25.965238+0000","last_fullsized":"2026-03-21T14:43:25.965238+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-23T02:01:38.014341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000228619,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocke
d_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967471+0000","last_change":"2026-03-21T14:43:25.967519+0000","last_active":"2026-03-21T14:43:25.967471+0000","last_peered":"2026-03-21T14:43:25.967471+0000","last_clean":"2026-03-21T14:43:25.967471+0000","last_became_active":"2026-03-21T14:43:23.955458+0000","last_became_peered":"2026-03-21T14:43:23.955458+0000","last_unstale":"2026-03-21T14:43:25.967471+0000","last_undegraded":"2026-03-21T14:43:25.967471+0000","last_fullsized":"2026-03-21T14:43:25.967471+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T20:23:00.621496+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00029983100000000002,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,1],"acting":[3,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"14'1","reported_seq":21,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967623+0000","last_change":"2026-03-21T14:43:25.967773+0000","last_active":"2026-03-21T14:43:25.967623+0000","last_peered":"2026-03-21T14:43:25.967623+0000","last_clean":"2026-03-21T14:43:25.967623+0000","last_became_active":"2026-03-21T14:43:23.954462+0000","last_became_peered":"2026-03-21T14:43:23.954462+0000","last_unstale":"2026-03-21T14:43:25.967623+0000","last_undegraded":"2026-03-21T14:43:25.967623+0000","last_fullsized":"2026-03-21T14:43:25.967623+0000","mapping_epoch":13,"log_start":"0'0","ondisk_log_start":"0'0","created":13,"last_epoch_clean":14,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-21T14:43:22.943241+0000","last_clean_scrub_stamp":"2026-03-21T14:43:22.943241+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T19:35:29.960976+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00048056800000000001,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,2],"acting":[3,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"11'112","reported_seq":157,"reported_epoch":16,"state":"active+clean","last_fresh":"2026-03-21T14:43:25.967269+0000","last_change":"2026-03-21T14:43:20.134759+0000","last_active":"2026-03-21T14:43:25.967269+0000","last_peered":"2026-03-21T14:43:25.967269+0000","last_clean":"2026-03-21T14:43:25.967269+0000","last_became_active":"202
6-03-21T14:43:20.134081+0000","last_became_peered":"2026-03-21T14:43:20.134081+0000","last_unstale":"2026-03-21T14:43:25.967269+0000","last_undegraded":"2026-03-21T14:43:25.967269+0000","last_fullsized":"2026-03-21T14:43:25.967269+0000","mapping_epoch":10,"log_start":"0'0","ondisk_log_start":"0'0","created":10,"last_epoch_clean":11,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T14:43:18.903946+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T14:43:18.903946+0000","last_clean_scrub_stamp":"2026-03-21T14:43:18.903946+0000","objects_scrubbed":0,"log_size":112,"log_dups_size":0,"ondisk_log_size":112,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-23T01:14:00.027217+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":145,"num_write_kb":2672,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[3,0],"acting":[3,0],"avail_no_missing":[],"object_locati
on_counts":[],"blocked_by":[],"up_primary":3,"acting_primary":3,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":4},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":86,"num_read_kb":73,"num_write":145,"num_write_kb":2672,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush
_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":112,"ondisk_log_size":112,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":3,"up_from":9,"seq":38654705667,"num_pgs":6,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":720,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":737280,"data_stored":635090,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8775,"internal_metadata":27450809},"hb_peers":[0,1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":2,"up_from":9,"seq":38654705668,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26984,"kb_used_data":136,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830616,"statfs":{"total":107374182400,"available":107346550784,"internally_reserved":0,"allocated":139264,"data_stored":44703,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,1,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd"
:1,"up_from":9,"seq":38654705667,"num_pgs":5,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26988,"kb_used_data":140,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830612,"statfs":{"total":107374182400,"available":107346546688,"internally_reserved":0,"allocated":143360,"data_stored":44722,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,2,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":9,"seq":38654705667,"num_pgs":4,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27572,"kb_used_data":708,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":104830028,"statfs":{"total":107374182400,"available":107345948672,"internally_reserved":0,"allocated":724992,"data_stored":630983,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8127,"internal_metadata":27451457},"hb_peers":[1,2,3],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocate
d":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":3,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-21T14:43:30.255 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-21T14:43:30.255 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 2026-03-21T14:43:30.255 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-21T14:43:30.255 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-21T14:43:30.502 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T14:43:30.503 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-21T14:43:30.515 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-21T14:43:30.515 INFO:teuthology.run_tasks:Running task exec... 2026-03-21T14:43:30.525 INFO:teuthology.task.exec:Executing custom commands... 
2026-03-21T14:43:30.526 INFO:teuthology.task.exec:Running commands on role client.0 host ubuntu@vm05.local 2026-03-21T14:43:30.526 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir /home/ubuntu/cephtest/tmpfs' 2026-03-21T14:43:30.551 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkdir /home/ubuntu/cephtest/rbd-pwl-cache' 2026-03-21T14:43:30.615 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo mount -t tmpfs -o size=20G tmpfs /home/ubuntu/cephtest/tmpfs' 2026-03-21T14:43:30.688 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'truncate -s 20G /home/ubuntu/cephtest/tmpfs/loopfile' 2026-03-21T14:43:30.752 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'mkfs.ext4 /home/ubuntu/cephtest/tmpfs/loopfile' 2026-03-21T14:43:30.819 INFO:teuthology.orchestra.run.vm05.stderr:mke2fs 1.46.5 (30-Dec-2021) 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Discarding device blocks: 0/5242880 done 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Creating filesystem with 5242880 4k blocks and 1310720 inodes 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Filesystem UUID: d7d0c757-3a53-41fe-b3bb-da9cf85091b3 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Superblock backups stored on blocks: 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout: 4096000 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Allocating group tables: 0/160 done 2026-03-21T14:43:30.820 INFO:teuthology.orchestra.run.vm05.stdout:Writing inode tables: 0/160 done 2026-03-21T14:43:30.821 INFO:teuthology.orchestra.run.vm05.stdout:Creating 
journal (32768 blocks): done 2026-03-21T14:43:30.822 INFO:teuthology.orchestra.run.vm05.stdout:Writing superblocks and filesystem accounting information: 0/160 done 2026-03-21T14:43:30.822 INFO:teuthology.orchestra.run.vm05.stdout: 2026-03-21T14:43:30.823 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo mount -o loop /home/ubuntu/cephtest/tmpfs/loopfile /home/ubuntu/cephtest/rbd-pwl-cache' 2026-03-21T14:43:30.970 DEBUG:teuthology.orchestra.run.vm05:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo chmod 777 /home/ubuntu/cephtest/rbd-pwl-cache' 2026-03-21T14:43:31.003 INFO:teuthology.run_tasks:Running task exec_on_cleanup... 2026-03-21T14:43:31.006 INFO:teuthology.run_tasks:Running task qemu... 2026-03-21T14:43:31.012 INFO:tasks.rbd:Creating image client.0.1 with size 10240 2026-03-21T14:43:31.012 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --cluster ceph -p rbd create --size 10240 client.0.1 --image-format 2 2026-03-21T14:43:31.098 INFO:tasks.rbd:Creating image client.0.2 with size 10240 2026-03-21T14:43:31.098 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --cluster ceph -p rbd create --size 10240 client.0.2 --image-format 2 2026-03-21T14:43:31.152 DEBUG:teuthology.orchestra.run.vm05:> install -d -m0755 -- /home/ubuntu/cephtest/qemu /home/ubuntu/cephtest/archive/qemu 2026-03-21T14:43:31.167 INFO:teuthology.packaging:Installing package genisoimage on ubuntu@vm05.local 2026-03-21T14:43:31.167 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install genisoimage 2026-03-21T14:43:31.559 INFO:teuthology.orchestra.run.vm05.stdout:Last metadata expiration check: 0:01:12 ago on Sat 21 Mar 2026 02:42:19 PM UTC. 2026-03-21T14:43:31.635 INFO:teuthology.orchestra.run.vm05.stdout:Package genisoimage-1.1.11-48.el9.x86_64 is already installed. 
2026-03-21T14:43:31.655 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-21T14:43:31.655 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-21T14:43:31.655 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-21T14:43:31.680 INFO:teuthology.packaging:Installing package qemu-kvm-block-rbd on ubuntu@vm05.local 2026-03-21T14:43:31.680 DEBUG:teuthology.orchestra.run.vm05:> sudo yum -y install qemu-kvm-block-rbd 2026-03-21T14:43:32.040 INFO:teuthology.orchestra.run.vm05.stdout:Last metadata expiration check: 0:01:13 ago on Sat 21 Mar 2026 02:42:19 PM UTC. 2026-03-21T14:43:32.116 INFO:teuthology.orchestra.run.vm05.stdout:Package qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 is already installed. 2026-03-21T14:43:32.137 INFO:teuthology.orchestra.run.vm05.stdout:Dependencies resolved. 2026-03-21T14:43:32.138 INFO:teuthology.orchestra.run.vm05.stdout:Nothing to do. 2026-03-21T14:43:32.138 INFO:teuthology.orchestra.run.vm05.stdout:Complete! 2026-03-21T14:43:32.162 INFO:tasks.qemu:generating iso... 2026-03-21T14:43:32.162 INFO:tasks.qemu:Pulling tests from https://github.com/kshtsk/ceph.git ref 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-21T14:43:32.162 DEBUG:teuthology.orchestra.run.vm05:> rm -rf /home/ubuntu/cephtest/qemu_clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/qemu_clone.client.0 && cd /home/ubuntu/cephtest/qemu_clone.client.0 && git checkout 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-21T14:43:32.178 INFO:teuthology.orchestra.run.vm05.stderr:Cloning into '/home/ubuntu/cephtest/qemu_clone.client.0'... 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:Note: switching to '0392f78529848ec72469e8e431875cb98d3a5fb4'. 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:You are in 'detached HEAD' state. 
You can look around, make experimental 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:state without impacting any branches by switching back to a branch. 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: git switch -c <new-branch-name> 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:Or undo this operation with: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: git switch - 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr: 2026-03-21T14:44:09.021 INFO:teuthology.orchestra.run.vm05.stderr:HEAD is now at 0392f785298 qa/tasks/keystone: restart mariadb for rocky and alma linux too 2026-03-21T14:44:09.027 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:44:09.027 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/qemu/userdata.client.0 2026-03-21T14:44:09.082 DEBUG:teuthology.orchestra.run.vm05:> set -ex 2026-03-21T14:44:09.082 DEBUG:teuthology.orchestra.run.vm05:> dd of=/home/ubuntu/cephtest/qemu/metadata.client.0 2026-03-21T14:44:09.138 INFO:tasks.qemu:fetching test qa/run_xfstests_qemu.sh for client.0 
2026-03-21T14:44:09.138 DEBUG:teuthology.orchestra.run.vm05:> cp -- /home/ubuntu/cephtest/qemu_clone.client.0/qa/run_xfstests_qemu.sh /home/ubuntu/cephtest/qemu/client.0.test.sh && chmod 755 /home/ubuntu/cephtest/qemu/client.0.test.sh 2026-03-21T14:44:09.195 DEBUG:teuthology.orchestra.run.vm05:> genisoimage -quiet -input-charset utf-8 -volid cidata -joliet -rock -o /home/ubuntu/cephtest/qemu/client.0.iso -graft-points user-data=/home/ubuntu/cephtest/qemu/userdata.client.0 meta-data=/home/ubuntu/cephtest/qemu/metadata.client.0 ceph.conf=/etc/ceph/ceph.conf ceph.keyring=/etc/ceph/ceph.keyring test.sh=/home/ubuntu/cephtest/qemu/client.0.test.sh 2026-03-21T14:44:09.253 INFO:tasks.qemu:downloading base image 2026-03-21T14:44:09.254 DEBUG:teuthology.orchestra.run.vm05:> wget -nv -O /home/ubuntu/cephtest/qemu/base.client.0.0.qcow2 http://download.ceph.com/qa/ubuntu-12.04.qcow2 2026-03-21T15:25:51.657 INFO:teuthology.orchestra.run.vm05.stderr:2026-03-21 15:25:51 URL:http://download.ceph.com/qa/ubuntu-12.04.qcow2 [1206124544/1206124544] -> "/home/ubuntu/cephtest/qemu/base.client.0.0.qcow2" [1] 2026-03-21T15:25:51.658 DEBUG:teuthology.orchestra.run.vm05:> qemu-img convert -f qcow2 -O raw /home/ubuntu/cephtest/qemu/base.client.0.0.qcow2 rbd:rbd/client.0.0:conf=/etc/ceph/ceph.conf 2026-03-21T15:25:59.928 DEBUG:teuthology.orchestra.run.vm05:> rbd --cluster ceph resize --size=10240M client.0.0 || true 2026-03-21T15:26:00.084 INFO:teuthology.orchestra.run.vm05.stderr: Resizing image: 100% complete...done. 2026-03-21T15:26:00.108 DEBUG:teuthology.orchestra.run.vm05:> mkdir /home/ubuntu/cephtest/archive/qemu/client.0 && sudo modprobe kvm 2026-03-21T15:26:00.133 INFO:tasks.qemu:Creating the nfs export directory... 2026-03-21T15:26:00.133 DEBUG:teuthology.orchestra.run.vm05:> sudo mkdir -p /export/client.0 2026-03-21T15:26:00.194 INFO:tasks.qemu:Mounting the test directory... 
2026-03-21T15:26:00.194 DEBUG:teuthology.orchestra.run.vm05:> sudo mount --bind /home/ubuntu/cephtest/archive/qemu/client.0 /export/client.0 2026-03-21T15:26:00.256 INFO:tasks.qemu:Adding mount to /etc/exports... 2026-03-21T15:26:00.256 INFO:tasks.qemu:Deleting export from /etc/exports... 2026-03-21T15:26:00.256 DEBUG:teuthology.orchestra.run.vm05:> sudo sed -i '\|/export/client.0|d' /etc/exports 2026-03-21T15:26:00.318 DEBUG:teuthology.orchestra.run.vm05:> echo '/export/client.0 *(rw,no_root_squash,no_subtree_check,insecure)' | sudo tee -a /etc/exports 2026-03-21T15:26:00.380 INFO:teuthology.orchestra.run.vm05.stdout:/export/client.0 *(rw,no_root_squash,no_subtree_check,insecure) 2026-03-21T15:26:00.381 INFO:tasks.qemu:Restarting NFS... 2026-03-21T15:26:00.381 DEBUG:teuthology.orchestra.run.vm05:> sudo systemctl restart nfs-server 2026-03-21T15:26:00.784 DEBUG:teuthology.orchestra.run.vm05:> sudo udevadm control --reload 2026-03-21T15:26:00.819 DEBUG:teuthology.orchestra.run.vm05:> sudo udevadm trigger /dev/kvm 2026-03-21T15:26:00.896 DEBUG:teuthology.orchestra.run.vm05:> ls -l /dev/kvm 2026-03-21T15:26:00.952 INFO:teuthology.orchestra.run.vm05.stdout:crw-rw-rw-. 1 root kvm 10, 232 Mar 21 15:26 /dev/kvm 2026-03-21T15:26:00.952 INFO:tasks.qemu:starting qemu... 2026-03-21T15:26:00.952 DEBUG:teuthology.orchestra.run.vm05:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term /usr/libexec/qemu-kvm -enable-kvm -nographic -cpu host -smp 4 -m 4096 -cdrom /home/ubuntu/cephtest/qemu/client.0.iso -drive file=rbd:rbd/client.0.0:conf=/etc/ceph/ceph.conf:id=0,format=raw,if=virtio,cache=writeback -drive file=rbd:rbd/client.0.1:conf=/etc/ceph/ceph.conf:id=0,format=raw,if=virtio,cache=writeback -drive file=rbd:rbd/client.0.2:conf=/etc/ceph/ceph.conf:id=0,format=raw,if=virtio,cache=writeback 2026-03-21T15:26:00.995 DEBUG:teuthology.run_tasks:Unwinding manager qemu 2026-03-21T15:26:00.997 INFO:tasks.qemu:waiting for qemu tests to finish... 
2026-03-21T15:26:01.383 INFO:tasks.qemu.client.0.vm05.stderr:qemu-kvm: warning: Machine type 'pc-i440fx-rhel7.6.0' is deprecated: machines from the previous RHEL major release are subject to deletion in the next RHEL major release 2026-03-21T15:26:01.482 INFO:tasks.qemu.client.0.vm05.stdout:SeaBIOS (version 1.16.3-5.el9) 2026-03-21T15:26:01.487 INFO:tasks.qemu.client.0.vm05.stdout:iPXE (http://ipxe.org) 00:03.0 CA00 PCI2.10 PnP PMM+BEFCCC40+BEF0CC40 CA00 2026-03-21T15:26:01.500 INFO:tasks.qemu.client.0.vm05.stdout:Press Ctrl-B to configure iPXE (PCI 00:03.0)... 2026-03-21T15:26:01.702 INFO:tasks.qemu.client.0.vm05.stdout:Booting from Hard Disk... 2026-03-21T15:26:02.008 INFO:tasks.qemu.client.0.vm05.stdout:GNU GRUB version 1.99-21ubuntu3.4 2026-03-21T15:26:02.014 INFO:tasks.qemu.client.0.vm05.stdout: Use the ↑ and ↓ keys to select which entry is highlighted. 2026-03-21T15:26:02.020 INFO:tasks.qemu.client.0.vm05.stdout: Press enter to boot the selected OS, 'e' to edit the commands 2026-03-21T15:26:02.024 INFO:tasks.qemu.client.0.vm05.stdout: before booting or 'c' for a command-line. 
2026-03-21T15:26:02.107 INFO:tasks.qemu.client.0.vm05.stdout: Ubuntu, with Linux 3.2.0-32-virtual  Ubuntu, with Linux 3.2.0-32-virtual (recovery mode)  Memory test (memtest86+)  Memory test (memtest86+, serial console 115200) 2026-03-21T15:26:02.957 INFO:tasks.qemu.client.0.vm05.stdout: The highlighted entry will be executed automatically in 5s. 
2026-03-21T15:26:07.814 INFO:tasks.qemu.client.0.vm05.stdout: The highlighted entry will be executed automatically in 0s. 
[ 0.000000] Initializing cgroup subsys cpuset 2026-03-21T15:26:07.815 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Initializing cgroup subsys cpu 2026-03-21T15:26:07.823 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Linux version 3.2.0-32-virtual (buildd@batsu) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #51-Ubuntu SMP Wed Sep 26 21:53:42 UTC 2012 (Ubuntu 3.2.0-32.51-virtual 3.2.30) 2026-03-21T15:26:07.827 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-32-virtual root=LABEL=cloudimg-rootfs ro console=ttyS0 2026-03-21T15:26:07.829 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] KERNEL supported cpus: 2026-03-21T15:26:07.830 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Intel GenuineIntel 2026-03-21T15:26:07.832 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] AMD AuthenticAMD 2026-03-21T15:26:07.833 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Centaur CentaurHauls 2026-03-21T15:26:07.835 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-provided physical RAM map: 2026-03-21T15:26:07.838 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 0000000000000000 - 000000000009fc00 (usable) 2026-03-21T15:26:07.840 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved) 2026-03-21T15:26:07.843 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved) 2026-03-21T15:26:07.846 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 0000000000100000 - 00000000bffd7000 (usable) 2026-03-21T15:26:07.849 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 00000000bffd7000 - 00000000c0000000 (reserved) 2026-03-21T15:26:07.852 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 00000000feffc000 - 00000000ff000000 (reserved) 2026-03-21T15:26:07.854 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved) 2026-03-21T15:26:07.857 
INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] BIOS-e820: 0000000100000000 - 0000000140000000 (usable) 2026-03-21T15:26:07.859 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] NX (Execute Disable) protection: active 2026-03-21T15:26:07.861 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] DMI 2.8 present. 2026-03-21T15:26:07.862 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] No AGP bridge found 2026-03-21T15:26:07.865 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] last_pfn = 0x140000 max_arch_pfn = 0x400000000 2026-03-21T15:26:07.868 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106 2026-03-21T15:26:07.870 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] last_pfn = 0xbffd7 max_arch_pfn = 0x400000000 2026-03-21T15:26:07.872 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] found SMP MP-table at [ffff8800000f5440] f5440 2026-03-21T15:26:07.874 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Using GB pages for direct mapping 2026-03-21T15:26:07.877 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] init_memory_mapping: 0000000000000000-00000000bffd7000 2026-03-21T15:26:07.880 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] init_memory_mapping: 0000000100000000-0000000140000000 2026-03-21T15:26:07.882 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] RAMDISK: 37786000 - 37bbb000 2026-03-21T15:26:07.884 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: RSDP 00000000000f5400 00014 (v00 BOCHS ) 2026-03-21T15:26:07.888 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: RSDT 00000000bffe20a7 00030 (v01 BOCHS BXPC 00000001 BXPC 00000001) 2026-03-21T15:26:07.891 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: FACP 00000000bffe1f7b 00074 (v01 BOCHS BXPC 00000001 BXPC 00000001) 2026-03-21T15:26:07.895 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: DSDT 00000000bffdfd40 0223B (v01 BOCHS BXPC 00000001 BXPC 00000001) 2026-03-21T15:26:07.897 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] 
ACPI: FACS 00000000bffdfd00 00040 2026-03-21T15:26:07.900 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: APIC 00000000bffe1fef 00090 (v03 BOCHS BXPC 00000001 BXPC 00000001) 2026-03-21T15:26:07.903 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: WAET 00000000bffe207f 00028 (v01 BOCHS BXPC 00000001 BXPC 00000001) 2026-03-21T15:26:07.905 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] No NUMA configuration found 2026-03-21T15:26:07.907 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Faking a node at 0000000000000000-0000000140000000 2026-03-21T15:26:07.910 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Initmem setup node 0 0000000000000000-0000000140000000 2026-03-21T15:26:07.912 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] NODE_DATA [000000013fffb000 - 000000013fffffff] 2026-03-21T15:26:07.915 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] kvm-clock: Using msrs 4b564d01 and 4b564d00 2026-03-21T15:26:07.917 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] kvm-clock: cpu 0, msr 0:1cf6681, boot clock 2026-03-21T15:26:07.918 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Zone PFN ranges: 2026-03-21T15:26:07.920 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] DMA 0x00000010 -> 0x00001000 2026-03-21T15:26:07.921 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] DMA32 0x00001000 -> 0x00100000 2026-03-21T15:26:07.924 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Normal 0x00100000 -> 0x00140000 2026-03-21T15:26:07.926 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Movable zone start PFN for each node 2026-03-21T15:26:07.927 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] early_node_map[3] active PFN ranges 2026-03-21T15:26:07.929 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] 0: 0x00000010 -> 0x0000009f 2026-03-21T15:26:07.931 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] 0: 0x00000100 -> 0x000bffd7 2026-03-21T15:26:07.932 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] 0: 0x00100000 -> 0x00140000 2026-03-21T15:26:07.934 
INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: PM-Timer IO Port: 0x608 2026-03-21T15:26:07.936 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) 2026-03-21T15:26:07.939 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) 2026-03-21T15:26:07.941 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled) 2026-03-21T15:26:07.944 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled) 2026-03-21T15:26:07.946 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) 2026-03-21T15:26:07.949 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: IOAPIC (id[0x00] address[0xfec00000] gsi_base[0]) 2026-03-21T15:26:07.952 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 2026-03-21T15:26:07.955 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) 2026-03-21T15:26:07.958 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) 2026-03-21T15:26:07.961 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) 2026-03-21T15:26:07.964 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) 2026-03-21T15:26:07.967 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) 2026-03-21T15:26:07.969 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Using ACPI (MADT) for SMP configuration information 2026-03-21T15:26:07.971 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] SMP: Allowing 4 CPUs, 0 hotplug CPUs 2026-03-21T15:26:07.974 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 000000000009f000 - 
00000000000a0000 2026-03-21T15:26:07.977 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000 2026-03-21T15:26:07.980 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000000f0000 - 0000000000100000 2026-03-21T15:26:07.983 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000bffd7000 - 00000000c0000000 2026-03-21T15:26:07.986 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000c0000000 - 00000000feffc000 2026-03-21T15:26:07.989 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000feffc000 - 00000000ff000000 2026-03-21T15:26:07.992 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000ff000000 - 00000000fffc0000 2026-03-21T15:26:07.995 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PM: Registered nosave memory: 00000000fffc0000 - 0000000100000000 2026-03-21T15:26:07.999 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Allocating PCI resources starting at c0000000 (gap: c0000000:3effc000) 2026-03-21T15:26:08.001 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Booting paravirtualized kernel on KVM 2026-03-21T15:26:08.004 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] setup_percpu: NR_CPUS:64 nr_cpumask_bits:64 nr_cpu_ids:4 nr_node_ids:1 2026-03-21T15:26:08.008 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PERCPU: Embedded 28 pages/cpu @ffff88013fc00000 s82880 r8192 d23616 u524288 2026-03-21T15:26:08.010 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] kvm-clock: cpu 0, msr 1:3fc13681, primary cpu clock 2026-03-21T15:26:08.012 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] KVM setup async PF for cpu 0 2026-03-21T15:26:08.014 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] kvm-stealtime: cpu 0, msr 13fc0dd40 2026-03-21T15:26:08.018 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Built 1 zonelists in Node order, mobility grouping 
on. Total pages: 1027937 2026-03-21T15:26:08.019 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Policy zone: Normal 2026-03-21T15:26:08.023 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-32-virtual root=LABEL=cloudimg-rootfs ro console=ttyS0 2026-03-21T15:26:08.026 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes) 2026-03-21T15:26:08.029 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] xsave/xrstor: enabled xstate_bv 0x7, cntxt size 0x340 2026-03-21T15:26:08.030 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Checking aperture... 2026-03-21T15:26:08.032 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] No AGP bridge found 2026-03-21T15:26:08.036 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Memory: 4041548k/5242880k available (6532k kernel code, 1049192k absent, 152140k reserved, 6657k data, 924k init) 2026-03-21T15:26:08.039 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 2026-03-21T15:26:08.042 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Hierarchical RCU implementation. 2026-03-21T15:26:08.045 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] RCU dyntick-idle grace-period acceleration is enabled. 2026-03-21T15:26:08.045 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] NR_IRQS:4352 nr_irqs:712 16 2026-03-21T15:26:08.046 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Console: colour VGA+ 80x25 2026-03-21T15:26:08.047 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] console [ttyS0] enabled 2026-03-21T15:26:08.053 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] allocated 33554432 bytes of page_cgroup 2026-03-21T15:26:08.056 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] please try 'cgroup_disable=memory' option if you don't want memory cgroups 2026-03-21T15:26:08.058 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Detected 4192.140 MHz processor. 
2026-03-21T15:26:08.060 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.000000] Marking TSC unstable due to TSCs unsynchronized 2026-03-21T15:26:08.064 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.008000] Calibrating delay loop (skipped) preset value.. 8384.28 BogoMIPS (lpj=16768560) 2026-03-21T15:26:08.066 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.009265] pid_max: default: 32768 minimum: 301 2026-03-21T15:26:08.068 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.012027] Security Framework initialized 2026-03-21T15:26:08.069 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.013718] AppArmor: AppArmor initialized 2026-03-21T15:26:08.071 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.016011] Yama: becoming mindful. 2026-03-21T15:26:08.074 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.017902] Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes) 2026-03-21T15:26:08.078 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.020609] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes) 2026-03-21T15:26:08.080 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.024117] Mount-cache hash table entries: 256 2026-03-21T15:26:08.082 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.026192] Initializing cgroup subsys cpuacct 2026-03-21T15:26:08.084 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.028013] Initializing cgroup subsys memory 2026-03-21T15:26:08.086 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.029932] Initializing cgroup subsys devices 2026-03-21T15:26:08.088 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.032012] Initializing cgroup subsys freezer 2026-03-21T15:26:08.089 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.033870] Initializing cgroup subsys blkio 2026-03-21T15:26:08.091 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.036013] Initializing cgroup subsys perf_event 2026-03-21T15:26:08.094 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.038812] mce: CPU supports 10 MCE banks 2026-03-21T15:26:08.154 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.100033] ACPI: Core revision 20110623 2026-03-21T15:26:08.157 INFO:tasks.qemu.client.0.vm05.stdout:[ 
0.101927] ftrace: allocating 27008 entries in 106 pages 2026-03-21T15:26:08.397 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.113165] Enabling x2apic 2026-03-21T15:26:08.398 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.114309] Enabled x2apic 2026-03-21T15:26:08.401 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.116017] Switched APIC routing to physical x2apic. 2026-03-21T15:26:08.409 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.126067] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 2026-03-21T15:26:08.412 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.128008] CPU0: AMD Ryzen 9 7950X3D 16-Core Processor stepping 02 2026-03-21T15:26:08.519 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.132007] Performance Events: no PMU driver, software events only. 2026-03-21T15:26:08.522 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.132007] NMI watchdog disabled (cpu0): hardware events not enabled 2026-03-21T15:26:08.524 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.132163] Booting Node 0, Processors #1 2026-03-21T15:26:08.539 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.148055] NMI watchdog disabled (cpu1): hardware events not enabled 2026-03-21T15:26:08.541 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.148053] KVM setup async PF for cpu 1 2026-03-21T15:26:08.543 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.148053] kvm-stealtime: cpu 1, msr 13fc8dd40 2026-03-21T15:26:08.545 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.148053] kvm-clock: cpu 1, msr 1:3fc93681, secondary cpu clock 2026-03-21T15:26:08.546 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.152149] #2 2026-03-21T15:26:08.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.166507] NMI watchdog disabled (cpu2): hardware events not enabled 2026-03-21T15:26:08.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.166506] KVM setup async PF for cpu 2 2026-03-21T15:26:08.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.166506] kvm-stealtime: cpu 2, msr 13fd0dd40 2026-03-21T15:26:08.568 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.166506] kvm-clock: cpu 2, msr 1:3fd13681, secondary cpu clock 
2026-03-21T15:26:08.569 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.168147] #3 Ok. 2026-03-21T15:26:08.584 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.185356] NMI watchdog disabled (cpu3): hardware events not enabled 2026-03-21T15:26:08.586 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.185355] KVM setup async PF for cpu 3 2026-03-21T15:26:08.588 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.185355] kvm-stealtime: cpu 3, msr 13fd8dd40 2026-03-21T15:26:08.590 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.185355] kvm-clock: cpu 3, msr 1:3fd93681, secondary cpu clock 2026-03-21T15:26:08.592 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.188049] Brought up 4 CPUs 2026-03-21T15:26:08.594 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.189281] Total of 4 processors activated (33537.12 BogoMIPS). 2026-03-21T15:26:08.598 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.196121] devtmpfs: initialized 2026-03-21T15:26:08.600 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.197822] EVM: security.selinux 2026-03-21T15:26:08.602 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.199177] EVM: security.SMACK64 2026-03-21T15:26:08.603 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.200014] EVM: security.capability 2026-03-21T15:26:08.605 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.201605] print_constraints: dummy: 2026-03-21T15:26:08.607 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.204186] RTC time: 15:26:08, date: 03/21/26 2026-03-21T15:26:08.609 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.206120] NET: Registered protocol family 16 2026-03-21T15:26:08.613 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.209821] Extended Config Space enabled on 0 nodes 2026-03-21T15:26:08.622 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.212297] ACPI: bus type pci registered 2026-03-21T15:26:08.622 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.212355] Trying to unpack rootfs image as initramfs... 
2026-03-21T15:26:08.622 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.216255] PCI: Using configuration type 1 for base access 2026-03-21T15:26:08.622 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.216255] PCI: Using configuration type 1 for extended access 2026-03-21T15:26:08.624 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.221120] bio: create slab at 0 2026-03-21T15:26:08.627 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.223693] ACPI: Added _OSI(Module Device) 2026-03-21T15:26:08.630 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.224015] ACPI: Added _OSI(Processor Device) 2026-03-21T15:26:08.633 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.226494] ACPI: Added _OSI(3.0 _SCP Extensions) 2026-03-21T15:26:08.637 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.232017] ACPI: Added _OSI(Processor Aggregator Device) 2026-03-21T15:26:08.641 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.236683] ACPI: Interpreter enabled 2026-03-21T15:26:08.643 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.238525] ACPI: (supports S0 S5) 2026-03-21T15:26:08.646 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.240496] ACPI: Using IOAPIC for interrupt routing 2026-03-21T15:26:08.649 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.245041] ACPI: No dock devices found. 2026-03-21T15:26:08.651 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.247252] HEST: Table not found. 
2026-03-21T15:26:08.657 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.248017] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug 2026-03-21T15:26:08.660 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.256047] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) 2026-03-21T15:26:08.664 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.256065] pci_root PNP0A03:00: host bridge window [io 0x0000-0x0cf7] 2026-03-21T15:26:08.668 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.260022] pci_root PNP0A03:00: host bridge window [io 0x0d00-0xffff] 2026-03-21T15:26:08.673 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.264020] pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff] 2026-03-21T15:26:08.676 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.268033] Freeing initrd memory: 4308k freed 2026-03-21T15:26:08.680 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.272037] pci_root PNP0A03:00: host bridge window [mem 0xc0000000-0xfebfffff] 2026-03-21T15:26:08.684 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.276024] pci_root PNP0A03:00: host bridge window [mem 0x380000000000-0x38007fffffff] 2026-03-21T15:26:08.697 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.290486] pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI 2026-03-21T15:26:08.701 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.296054] pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB 2026-03-21T15:26:08.751 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.346013] pci0000:00: Requesting ACPI _OSC control (0x1d) 2026-03-21T15:26:08.756 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.348034] pci0000:00: ACPI _OSC request failed (AE_NOT_FOUND), returned control mask: 0x1d 2026-03-21T15:26:08.760 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.352024] ACPI _OSC control for PCIe not granted, disabling ASPM 2026-03-21T15:26:08.764 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.356750] ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11) 2026-03-21T15:26:08.768 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.361001] ACPI: PCI Interrupt Link [LNKB] 
(IRQs 5 *10 11) 2026-03-21T15:26:08.772 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.365090] ACPI: PCI Interrupt Link [LNKC] (IRQs 5 10 *11) 2026-03-21T15:26:08.775 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.369029] ACPI: PCI Interrupt Link [LNKD] (IRQs 5 10 *11) 2026-03-21T15:26:08.778 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.372968] ACPI: PCI Interrupt Link [LNKS] (IRQs *9) 2026-03-21T15:26:08.783 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.377817] vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none 2026-03-21T15:26:08.785 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.380027] vgaarb: loaded 2026-03-21T15:26:08.788 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.381629] vgaarb: bridge control possible 0000:00:02.0 2026-03-21T15:26:08.792 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.384077] i2c-core: driver [aat2870] using legacy suspend method 2026-03-21T15:26:08.796 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.388026] i2c-core: driver [aat2870] using legacy resume method 2026-03-21T15:26:08.798 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.392074] SCSI subsystem initialized 2026-03-21T15:26:08.801 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.397415] usbcore: registered new interface driver usbfs 2026-03-21T15:26:08.805 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.399289] usbcore: registered new interface driver hub 2026-03-21T15:26:08.808 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.404117] usbcore: registered new device driver usb 2026-03-21T15:26:08.810 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.407028] PCI: Using ACPI for IRQ routing 2026-03-21T15:26:08.813 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.408595] NetLabel: Initializing 2026-03-21T15:26:08.815 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.410789] NetLabel: domain hash size = 128 2026-03-21T15:26:08.818 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.412027] NetLabel: protocols = UNLABELED CIPSOv4 2026-03-21T15:26:08.821 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.416033] NetLabel: unlabeled traffic allowed by default 
2026-03-21T15:26:08.824 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.420632] Switching to clocksource kvm-clock 2026-03-21T15:26:08.829 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.423415] AppArmor: AppArmor Filesystem Enabled 2026-03-21T15:26:08.830 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.425131] pnp: PnP ACPI init 2026-03-21T15:26:08.832 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.426266] ACPI: bus type pnp registered 2026-03-21T15:26:08.834 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.428335] pnp: PnP ACPI: found 9 devices 2026-03-21T15:26:08.835 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.429793] ACPI: ACPI bus type pnp unregistered 2026-03-21T15:26:08.842 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.436742] NET: Registered protocol family 2 2026-03-21T15:26:08.845 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.438334] IP route cache hash table entries: 131072 (order: 8, 1048576 bytes) 2026-03-21T15:26:08.849 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.441933] TCP established hash table entries: 524288 (order: 11, 8388608 bytes) 2026-03-21T15:26:08.852 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.445435] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes) 2026-03-21T15:26:08.854 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.447880] TCP: Hash tables configured (established 524288 bind 65536) 2026-03-21T15:26:08.855 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.450264] TCP reno registered 2026-03-21T15:26:08.859 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.451389] UDP hash table entries: 2048 (order: 4, 65536 bytes) 2026-03-21T15:26:08.861 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.454719] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes) 2026-03-21T15:26:08.863 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.457189] NET: Registered protocol family 1 2026-03-21T15:26:08.865 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.458798] pci 0000:00:00.0: Limiting direct PCI/PCI transfers 2026-03-21T15:26:08.867 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.460996] pci 0000:00:01.0: PIIX3: Enabling Passive Release 
2026-03-21T15:26:08.869 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.463105] pci 0000:00:01.0: Activating ISA DMA hang workarounds 2026-03-21T15:26:08.872 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.465572] PCI-DMA: Using software bounce buffering for IO (SWIOTLB) 2026-03-21T15:26:08.875 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.467881] Placing 64MB software IO TLB between ffff8800bbfd7000 - ffff8800bffd7000 2026-03-21T15:26:08.877 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.470674] software IO TLB at phys 0xbbfd7000 - 0xbffd7000 2026-03-21T15:26:08.880 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.474104] audit: initializing netlink socket (disabled) 2026-03-21T15:26:08.883 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.476296] type=2000 audit(1774106767.476:1): initialized 2026-03-21T15:26:08.896 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.489305] HugeTLB registered 2 MB page size, pre-allocated 0 pages 2026-03-21T15:26:08.898 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.492852] VFS: Disk quotas dquot_6.5.2 2026-03-21T15:26:08.901 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.494446] Dquot-cache hash table entries: 512 (order 0, 4096 bytes) 2026-03-21T15:26:08.903 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.497217] fuse init (API version 7.17) 2026-03-21T15:26:08.904 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.498790] msgmni has been set to 7902 2026-03-21T15:26:08.908 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.501701] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) 2026-03-21T15:26:08.910 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.504431] io scheduler noop registered 2026-03-21T15:26:08.912 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.505790] io scheduler deadline registered (default) 2026-03-21T15:26:08.913 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.507630] io scheduler cfq registered 2026-03-21T15:26:08.915 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.509047] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 2026-03-21T15:26:08.917 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.510989] pciehp: PCI 
Express Hot Plug Controller Driver version: 0.4 2026-03-21T15:26:08.920 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.513383] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0 2026-03-21T15:26:08.922 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.516237] ACPI: Power Button [PWRF] 2026-03-21T15:26:08.923 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.517909] ERST: Table is not found! 2026-03-21T15:26:08.925 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.519225] GHES: HEST is not enabled! 2026-03-21T15:26:08.927 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.521211] ACPI: PCI Interrupt Link [LNKD] enabled at IRQ 11 2026-03-21T15:26:08.930 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.523268] virtio-pci 0000:00:04.0: PCI INT A -> Link[LNKD] -> GSI 11 (level, high) -> IRQ 11 2026-03-21T15:26:08.933 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.527239] ACPI: PCI Interrupt Link [LNKA] enabled at IRQ 10 2026-03-21T15:26:08.936 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.529312] virtio-pci 0000:00:05.0: PCI INT A -> Link[LNKA] -> GSI 10 (level, high) -> IRQ 10 2026-03-21T15:26:08.939 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.533185] ACPI: PCI Interrupt Link [LNKB] enabled at IRQ 10 2026-03-21T15:26:08.942 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.535265] virtio-pci 0000:00:06.0: PCI INT A -> Link[LNKB] -> GSI 10 (level, high) -> IRQ 10 2026-03-21T15:26:08.945 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.538754] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled 2026-03-21T15:26:08.974 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.568509] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A 2026-03-21T15:26:09.168 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.761933] 00:07: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A 2026-03-21T15:26:09.174 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.768306] Linux agpgart interface v0.103 2026-03-21T15:26:09.176 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.771231] brd: module loaded 2026-03-21T15:26:09.178 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.773187] loop: module loaded 
2026-03-21T15:26:09.182 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.776594] vda: vda1 2026-03-21T15:26:09.186 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.780752] vdb: unknown partition table 2026-03-21T15:26:09.192 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.785440] vdc: unknown partition table 2026-03-21T15:26:09.194 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.788857] scsi0 : ata_piix 2026-03-21T15:26:09.196 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.790387] scsi1 : ata_piix 2026-03-21T15:26:09.199 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.791700] ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc100 irq 14 2026-03-21T15:26:09.202 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.794776] ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc108 irq 15 2026-03-21T15:26:09.204 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.798079] Fixed MDIO Bus: probed 2026-03-21T15:26:09.205 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.799490] tun: Universal TUN/TAP device driver, 1.6 2026-03-21T15:26:09.208 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.801326] tun: (C) 1999-2004 Max Krasnyansky 2026-03-21T15:26:09.210 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.804163] PPP generic driver version 2.4.2 2026-03-21T15:26:09.213 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.806088] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver 2026-03-21T15:26:09.215 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.808828] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver 2026-03-21T15:26:09.218 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.811359] uhci_hcd: USB Universal Host Controller Interface driver 2026-03-21T15:26:09.221 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.814232] usbcore: registered new interface driver libusual 2026-03-21T15:26:09.224 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.816843] i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 2026-03-21T15:26:09.228 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.822084] serio: i8042 KBD port at 0x60,0x64 irq 1 2026-03-21T15:26:09.231 
INFO:tasks.qemu.client.0.vm05.stdout:[ 0.824327] serio: i8042 AUX port at 0x60,0x64 irq 12 2026-03-21T15:26:09.233 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.826619] mousedev: PS/2 mouse device common for all mice 2026-03-21T15:26:09.237 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.829669] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 2026-03-21T15:26:09.240 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.833875] rtc_cmos 00:08: RTC can wake from S4 2026-03-21T15:26:09.243 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.836420] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 2026-03-21T15:26:09.246 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.839380] rtc0: alarms up to one day, y3k, 242 bytes nvram 2026-03-21T15:26:09.248 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.841977] device-mapper: uevent: version 1.0.3 2026-03-21T15:26:09.252 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.844104] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: dm-devel@redhat.com 2026-03-21T15:26:09.254 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.847814] cpuidle: using governor ladder 2026-03-21T15:26:09.256 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.849718] cpuidle: using governor menu 2026-03-21T15:26:09.258 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.851506] EFI Variables Facility v0.08 2004-May-17 2026-03-21T15:26:09.259 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.853588] TCP cubic registered 2026-03-21T15:26:09.261 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.855069] NET: Registered protocol family 10 2026-03-21T15:26:09.263 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.857210] NET: Registered protocol family 17 2026-03-21T15:26:09.265 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.859128] Registering the dns_resolver key type 2026-03-21T15:26:09.267 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.860849] registered taskstats version 1 2026-03-21T15:26:09.275 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.869168] Magic number: 2:525:437 2026-03-21T15:26:09.279 INFO:tasks.qemu.client.0.vm05.stdout:[ 
0.871123] rtc_cmos 00:08: setting system clock to 2026-03-21 15:26:08 UTC (1774106768) 2026-03-21T15:26:09.282 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.874918] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found 2026-03-21T15:26:09.283 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.877606] EDD information not available. 2026-03-21T15:26:09.372 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.965751] ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 2026-03-21T15:26:09.375 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.968995] ata2.00: configured for MWDMA2 2026-03-21T15:26:09.380 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.971743] scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 2026-03-21T15:26:09.383 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.976759] sr0: scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray 2026-03-21T15:26:09.385 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.978929] cdrom: Uniform CD-ROM driver Revision: 3.20 2026-03-21T15:26:09.388 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.981363] sr 1:0:0:0: Attached scsi generic sg0 type 5 2026-03-21T15:26:09.392 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.985292] Freeing unused kernel memory: 924k freed 2026-03-21T15:26:09.395 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.987777] Write protecting the kernel read-only data: 12288k 2026-03-21T15:26:09.402 INFO:tasks.qemu.client.0.vm05.stdout:[ 0.994860] Freeing unused kernel memory: 1640k freed 2026-03-21T15:26:09.408 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.000698] Freeing unused kernel memory: 1200k freed 2026-03-21T15:26:09.414 INFO:tasks.qemu.client.0.vm05.stdout:Loading, please wait... 2026-03-21T15:26:09.428 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.018373] udevd[98]: starting version 175 2026-03-21T15:26:09.429 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Loading essential drivers ... done. 2026-03-21T15:26:09.435 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Running /scripts/init-premount ... done. 
2026-03-21T15:26:09.446 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.038228] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI 2026-03-21T15:26:09.449 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.041828] e1000: Copyright (c) 1999-2006 Intel Corporation. 2026-03-21T15:26:09.452 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Mounting [ 1.045665] ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 11 2026-03-21T15:26:09.456 INFO:tasks.qemu.client.0.vm05.stdout:root file system[ 1.048397] e1000 0000:00:03.0: PCI INT A -> Link[LNKC] -> GSI 11 (level, high) -> IRQ 11 2026-03-21T15:26:09.462 INFO:tasks.qemu.client.0.vm05.stdout: ... Begin: Running /scripts/local-top ... done. 2026-03-21T15:26:09.471 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.064369] Floppy drive(s): fd0 is 2.88M AMI BIOS 2026-03-21T15:26:09.489 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.083005] FDC 0 is a S82078B 2026-03-21T15:26:09.566 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Running /scripts/local-premount ... done. 2026-03-21T15:26:09.574 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.166956] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null) 2026-03-21T15:26:09.916 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Running /scripts/local-bottom ... [ 1.509755] e1000 0000:00:03.0: eth0: (PCI:33MHz:32-bit) 52:54:00:12:34:56 2026-03-21T15:26:09.919 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.512292] e1000 0000:00:03.0: eth0: Intel(R) PRO/1000 Network Connection 2026-03-21T15:26:10.010 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.605312] vda: vda1 2026-03-21T15:26:10.015 INFO:tasks.qemu.client.0.vm05.stdout:GROWROOT: CHANGED: partition=1 start=16065 old: size=4176900 end=4192965 new: size=20948760,end=20964825 2026-03-21T15:26:10.204 INFO:tasks.qemu.client.0.vm05.stdout:[ 1.796181] EXT4-fs (vda1): mounted filesystem with ordered data mode. Opts: (null) 2026-03-21T15:26:10.206 INFO:tasks.qemu.client.0.vm05.stdout:done. 2026-03-21T15:26:10.206 INFO:tasks.qemu.client.0.vm05.stdout:done. 
2026-03-21T15:26:10.274 INFO:tasks.qemu.client.0.vm05.stdout:Begin: Running /scripts/init-bottom ... done. 2026-03-21T15:26:12.884 INFO:tasks.qemu.client.0.vm05.stdout:cloud-init start-local running: Sat, 21 Mar 2026 15:26:09 +0000. up 2.30 seconds 2026-03-21T15:26:12.885 INFO:tasks.qemu.client.0.vm05.stdout:no instance data found in start-local 2026-03-21T15:26:14.840 INFO:tasks.qemu.client.0.vm05.stdout:ci-info: lo : 1 127.0.0.1 255.0.0.0 . 2026-03-21T15:26:14.842 INFO:tasks.qemu.client.0.vm05.stdout:ci-info: eth0 : 1 10.0.2.15 255.255.255.0 52:54:00:12:34:56 2026-03-21T15:26:14.850 INFO:tasks.qemu.client.0.vm05.stdout:ci-info: route-0: 0.0.0.0 10.0.2.2 0.0.0.0 eth0 UG 2026-03-21T15:26:14.852 INFO:tasks.qemu.client.0.vm05.stdout:ci-info: route-1: 10.0.2.0 0.0.0.0 255.255.255.0 eth0 U 2026-03-21T15:26:14.854 INFO:tasks.qemu.client.0.vm05.stdout:cloud-init start running: Sat, 21 Mar 2026 15:26:14 +0000. up 6.42 seconds 2026-03-21T15:26:14.964 INFO:tasks.qemu.client.0.vm05.stdout:found data source: DataSourceNoCloud [seed=/dev/sr0] 2026-03-21T15:26:15.868 INFO:tasks.qemu.client.0.vm05.stdout:WARN:stdout, stderr changing to (| tee -a /var/log/cloud-init-output.log,| tee -a /var/log/cloud-init-output.log)Generating public/private rsa key pair. 2026-03-21T15:26:15.868 INFO:tasks.qemu.client.0.vm05.stdout:Your identification has been saved in /etc/ssh/ssh_host_rsa_key. 2026-03-21T15:26:15.868 INFO:tasks.qemu.client.0.vm05.stdout:Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub. 2026-03-21T15:26:15.868 INFO:tasks.qemu.client.0.vm05.stdout:The key fingerprint is: 2026-03-21T15:26:15.870 INFO:tasks.qemu.client.0.vm05.stdout:b2:ad:ca:03:43:b5:8e:40:e7:1d:48:58:8f:4e:a3:a4 root@test 2026-03-21T15:26:15.871 INFO:tasks.qemu.client.0.vm05.stdout:The key's randomart image is: 2026-03-21T15:26:15.872 INFO:tasks.qemu.client.0.vm05.stdout:+--[ RSA 2048]----+ 2026-03-21T15:26:15.873 INFO:tasks.qemu.client.0.vm05.stdout:| +o. 
| 2026-03-21T15:26:15.874 INFO:tasks.qemu.client.0.vm05.stdout:| o o+. | 2026-03-21T15:26:15.875 INFO:tasks.qemu.client.0.vm05.stdout:|..o=.o. | 2026-03-21T15:26:15.877 INFO:tasks.qemu.client.0.vm05.stdout:|+ =.o. | 2026-03-21T15:26:15.878 INFO:tasks.qemu.client.0.vm05.stdout:|E+ + . S | 2026-03-21T15:26:15.879 INFO:tasks.qemu.client.0.vm05.stdout:| + . + | 2026-03-21T15:26:15.880 INFO:tasks.qemu.client.0.vm05.stdout:| o . . | 2026-03-21T15:26:15.881 INFO:tasks.qemu.client.0.vm05.stdout:| .. . | 2026-03-21T15:26:15.882 INFO:tasks.qemu.client.0.vm05.stdout:| oo. | 2026-03-21T15:26:15.882 INFO:tasks.qemu.client.0.vm05.stdout:+-----------------+ 2026-03-21T15:26:15.898 INFO:tasks.qemu.client.0.vm05.stdout:Generating public/private dsa key pair. 2026-03-21T15:26:15.901 INFO:tasks.qemu.client.0.vm05.stdout:Your identification has been saved in /etc/ssh/ssh_host_dsa_key. 2026-03-21T15:26:15.904 INFO:tasks.qemu.client.0.vm05.stdout:Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub. 2026-03-21T15:26:15.907 INFO:tasks.qemu.client.0.vm05.stdout:The key fingerprint is: 2026-03-21T15:26:15.910 INFO:tasks.qemu.client.0.vm05.stdout:40:45:34:46:17:41:cb:a7:3d:9d:e5:9c:9d:5b:f8:39 root@test 2026-03-21T15:26:15.911 INFO:tasks.qemu.client.0.vm05.stdout:The key's randomart image is: 2026-03-21T15:26:15.913 INFO:tasks.qemu.client.0.vm05.stdout:+--[ DSA 1024]----+ 2026-03-21T15:26:15.913 INFO:tasks.qemu.client.0.vm05.stdout:| .=B.=o | 2026-03-21T15:26:15.914 INFO:tasks.qemu.client.0.vm05.stdout:| . . + . | 2026-03-21T15:26:15.915 INFO:tasks.qemu.client.0.vm05.stdout:| . o . .| 2026-03-21T15:26:15.917 INFO:tasks.qemu.client.0.vm05.stdout:| . + . *+| 2026-03-21T15:26:15.917 INFO:tasks.qemu.client.0.vm05.stdout:| S . o +o=| 2026-03-21T15:26:15.918 INFO:tasks.qemu.client.0.vm05.stdout:| . 
.+| 2026-03-21T15:26:15.919 INFO:tasks.qemu.client.0.vm05.stdout:| E.| 2026-03-21T15:26:15.920 INFO:tasks.qemu.client.0.vm05.stdout:| .| 2026-03-21T15:26:15.921 INFO:tasks.qemu.client.0.vm05.stdout:| | 2026-03-21T15:26:15.923 INFO:tasks.qemu.client.0.vm05.stdout:+-----------------+ 2026-03-21T15:26:15.923 INFO:tasks.qemu.client.0.vm05.stdout:Generating public/private ecdsa key pair. 2026-03-21T15:26:15.925 INFO:tasks.qemu.client.0.vm05.stdout:Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key. 2026-03-21T15:26:15.927 INFO:tasks.qemu.client.0.vm05.stdout:Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub. 2026-03-21T15:26:15.928 INFO:tasks.qemu.client.0.vm05.stdout:The key fingerprint is: 2026-03-21T15:26:15.930 INFO:tasks.qemu.client.0.vm05.stdout:2a:9c:1e:87:cc:5c:49:cb:a6:c6:7c:c4:97:61:bc:d1 root@test 2026-03-21T15:26:15.932 INFO:tasks.qemu.client.0.vm05.stdout:The key's randomart image is: 2026-03-21T15:26:15.936 INFO:tasks.qemu.client.0.vm05.stdout:+--[ECDSA 256]---+ 2026-03-21T15:26:15.938 INFO:tasks.qemu.client.0.vm05.stdout:| | 2026-03-21T15:26:15.939 INFO:tasks.qemu.client.0.vm05.stdout:| . . | 2026-03-21T15:26:15.940 INFO:tasks.qemu.client.0.vm05.stdout:| . = E | 2026-03-21T15:26:15.941 INFO:tasks.qemu.client.0.vm05.stdout:| + + = | 2026-03-21T15:26:15.942 INFO:tasks.qemu.client.0.vm05.stdout:| O S | 2026-03-21T15:26:15.943 INFO:tasks.qemu.client.0.vm05.stdout:| B B o | 2026-03-21T15:26:15.944 INFO:tasks.qemu.client.0.vm05.stdout:| # + | 2026-03-21T15:26:15.945 INFO:tasks.qemu.client.0.vm05.stdout:| o = | 2026-03-21T15:26:15.946 INFO:tasks.qemu.client.0.vm05.stdout:| . 
| 2026-03-21T15:26:15.947 INFO:tasks.qemu.client.0.vm05.stdout:+-----------------+ 2026-03-21T15:26:16.045 INFO:tasks.qemu.client.0.vm05.stdout:Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd 2026-03-21T15:26:16.176 INFO:tasks.qemu.client.0.vm05.stdout: * Starting AppArmor profiles  [ OK ] 2026-03-21T15:26:16.202 INFO:tasks.qemu.client.0.vm05.stdout:landscape-client is not configured, please run landscape-config. 2026-03-21T15:26:16.218 INFO:tasks.qemu.client.0.vm05.stdout: * Stopping System V initialisation compatibility[ OK ] 2026-03-21T15:26:16.228 INFO:tasks.qemu.client.0.vm05.stdout: * Starting System V runlevel compatibility[ OK ] 2026-03-21T15:26:16.233 INFO:tasks.qemu.client.0.vm05.stdout: * Starting automatic crash report generation[ OK ] 2026-03-21T15:26:16.233 INFO:tasks.qemu.client.0.vm05.stdout: * Starting deferred execution scheduler[ OK ] 2026-03-21T15:26:16.234 INFO:tasks.qemu.client.0.vm05.stdout: * Starting regular background program processing daemon[ OK ] 2026-03-21T15:26:16.236 INFO:tasks.qemu.client.0.vm05.stdout: * Starting ACPI daemon[ OK ] 2026-03-21T15:26:16.236 INFO:tasks.qemu.client.0.vm05.stdout: * Starting save kernel messages[ OK ] 2026-03-21T15:26:16.265 INFO:tasks.qemu.client.0.vm05.stdout: * Starting CPU interrupts balancing daemon[ OK ] 2026-03-21T15:26:16.277 INFO:tasks.qemu.client.0.vm05.stdout: * Stopping save kernel messages[ OK ] 2026-03-21T15:26:16.303 INFO:tasks.qemu.client.0.vm05.stdout: * Stopping System V runlevel compatibility[ OK ] 2026-03-21T15:26:16.405 INFO:tasks.qemu.client.0.vm05.stdout:Generating locales... 2026-03-21T15:26:16.780 INFO:tasks.qemu.client.0.vm05.stdout: en_US.UTF-8... done 2026-03-21T15:26:16.784 INFO:tasks.qemu.client.0.vm05.stdout:Generation complete. 2026-03-21T15:26:17.094 INFO:tasks.qemu.client.0.vm05.stdout:passwd: password expiry information changed. 
2026-03-21T15:26:17.275 INFO:tasks.qemu.client.0.vm05.stdout:Ign http://old-releases.ubuntu.com precise InRelease 2026-03-21T15:26:17.313 INFO:tasks.qemu.client.0.vm05.stdout:Get:1 http://old-releases.ubuntu.com precise-updates InRelease [55.7 kB] 2026-03-21T15:26:17.435 INFO:tasks.qemu.client.0.vm05.stdout:Get:2 http://old-releases.ubuntu.com precise-security InRelease [55.7 kB] 2026-03-21T15:26:17.493 INFO:tasks.qemu.client.0.vm05.stdout:Get:3 http://old-releases.ubuntu.com precise Release.gpg [198 B] 2026-03-21T15:26:17.532 INFO:tasks.qemu.client.0.vm05.stdout:Get:4 http://old-releases.ubuntu.com precise-updates/main Sources [815 kB] 2026-03-21T15:26:17.710 INFO:tasks.qemu.client.0.vm05.stdout:Get:5 http://old-releases.ubuntu.com precise-updates/universe Sources [208 kB] 2026-03-21T15:26:17.760 INFO:tasks.qemu.client.0.vm05.stdout:Get:6 http://old-releases.ubuntu.com precise-updates/main amd64 Packages [1104 kB] 2026-03-21T15:26:17.830 INFO:tasks.qemu.client.0.vm05.stdout:Get:7 http://old-releases.ubuntu.com precise-updates/universe amd64 Packages [406 kB] 2026-03-21T15:26:17.901 INFO:tasks.qemu.client.0.vm05.stdout:Get:8 http://old-releases.ubuntu.com precise-updates/main i386 Packages [1111 kB] 2026-03-21T15:26:17.959 INFO:tasks.qemu.client.0.vm05.stdout:Get:9 http://old-releases.ubuntu.com precise-updates/universe i386 Packages [422 kB] 2026-03-21T15:26:18.010 INFO:tasks.qemu.client.0.vm05.stdout:Get:10 http://old-releases.ubuntu.com precise-updates/main TranslationIndex [208 B] 2026-03-21T15:26:18.051 INFO:tasks.qemu.client.0.vm05.stdout:Get:11 http://old-releases.ubuntu.com precise-updates/universe TranslationIndex [205 B] 2026-03-21T15:26:18.094 INFO:tasks.qemu.client.0.vm05.stdout:Get:12 http://old-releases.ubuntu.com precise-security/main Sources [242 kB] 2026-03-21T15:26:18.141 INFO:tasks.qemu.client.0.vm05.stdout:Get:13 http://old-releases.ubuntu.com precise-security/universe Sources [88.1 kB] 2026-03-21T15:26:18.185 
INFO:tasks.qemu.client.0.vm05.stdout:Get:14 http://old-releases.ubuntu.com precise-security/main amd64 Packages [584 kB] 2026-03-21T15:26:18.235 INFO:tasks.qemu.client.0.vm05.stdout:Get:15 http://old-releases.ubuntu.com precise-security/universe amd64 Packages [213 kB] 2026-03-21T15:26:18.278 INFO:tasks.qemu.client.0.vm05.stdout:Get:16 http://old-releases.ubuntu.com precise-security/main i386 Packages [589 kB] 2026-03-21T15:26:18.328 INFO:tasks.qemu.client.0.vm05.stdout:Get:17 http://old-releases.ubuntu.com precise-security/universe i386 Packages [228 kB] 2026-03-21T15:26:18.369 INFO:tasks.qemu.client.0.vm05.stdout:Get:18 http://old-releases.ubuntu.com precise-security/main TranslationIndex [208 B] 2026-03-21T15:26:18.411 INFO:tasks.qemu.client.0.vm05.stdout:Get:19 http://old-releases.ubuntu.com precise-security/universe TranslationIndex [205 B] 2026-03-21T15:26:18.450 INFO:tasks.qemu.client.0.vm05.stdout:Get:20 http://old-releases.ubuntu.com precise Release [49.6 kB] 2026-03-21T15:26:18.488 INFO:tasks.qemu.client.0.vm05.stdout:Get:21 http://old-releases.ubuntu.com precise-updates/main Translation-en [353 kB] 2026-03-21T15:26:18.553 INFO:tasks.qemu.client.0.vm05.stdout:Get:22 http://old-releases.ubuntu.com precise-updates/universe Translation-en [176 kB] 2026-03-21T15:26:18.600 INFO:tasks.qemu.client.0.vm05.stdout:Get:23 http://old-releases.ubuntu.com precise-security/main Translation-en [202 kB] 2026-03-21T15:26:18.640 INFO:tasks.qemu.client.0.vm05.stdout:Get:24 http://old-releases.ubuntu.com precise-security/universe Translation-en [96.4 kB] 2026-03-21T15:26:18.679 INFO:tasks.qemu.client.0.vm05.stdout:Get:25 http://old-releases.ubuntu.com precise/main Sources [934 kB] 2026-03-21T15:26:18.737 INFO:tasks.qemu.client.0.vm05.stdout:Get:26 http://old-releases.ubuntu.com precise/universe Sources [5019 kB] 2026-03-21T15:26:18.861 INFO:tasks.qemu.client.0.vm05.stdout:Get:27 http://old-releases.ubuntu.com precise/main amd64 Packages [1273 kB] 2026-03-21T15:26:18.921 
INFO:tasks.qemu.client.0.vm05.stdout:Get:28 http://old-releases.ubuntu.com precise/universe amd64 Packages [4786 kB] 2026-03-21T15:26:19.044 INFO:tasks.qemu.client.0.vm05.stdout:Get:29 http://old-releases.ubuntu.com precise/main i386 Packages [1274 kB] 2026-03-21T15:26:19.105 INFO:tasks.qemu.client.0.vm05.stdout:Get:30 http://old-releases.ubuntu.com precise/universe i386 Packages [4796 kB] 2026-03-21T15:26:19.213 INFO:tasks.qemu.client.0.vm05.stdout:Get:31 http://old-releases.ubuntu.com precise/main TranslationIndex [3706 B] 2026-03-21T15:26:19.250 INFO:tasks.qemu.client.0.vm05.stdout:Get:32 http://old-releases.ubuntu.com precise/universe TranslationIndex [2922 B] 2026-03-21T15:26:19.293 INFO:tasks.qemu.client.0.vm05.stdout:Get:33 http://old-releases.ubuntu.com precise/main Translation-en [726 kB] 2026-03-21T15:26:19.351 INFO:tasks.qemu.client.0.vm05.stdout:Get:34 http://old-releases.ubuntu.com precise/universe Translation-en [3341 kB] 2026-03-21T15:26:22.596 INFO:tasks.qemu.client.0.vm05.stdout:Fetched 29.2 MB in 5s (5320 kB/s) 2026-03-21T15:26:23.607 INFO:tasks.qemu.client.0.vm05.stdout:Reading package lists... 2026-03-21T15:26:23.611 INFO:tasks.qemu.client.0.vm05.stdout:Reading package lists... 2026-03-21T15:26:23.755 INFO:tasks.qemu.client.0.vm05.stdout:Building dependency tree... 2026-03-21T15:26:23.755 INFO:tasks.qemu.client.0.vm05.stdout:Reading state information... 2026-03-21T15:26:23.818 INFO:tasks.qemu.client.0.vm05.stdout:The following packages will be upgraded: 2026-03-21T15:26:23.819 INFO:tasks.qemu.client.0.vm05.stdout: ca-certificates libssl1.0.0 2026-03-21T15:26:23.906 INFO:tasks.qemu.client.0.vm05.stdout:2 upgraded, 0 newly installed, 0 to remove and 226 not upgraded. 2026-03-21T15:26:23.907 INFO:tasks.qemu.client.0.vm05.stdout:Need to get 1233 kB of archives. 2026-03-21T15:26:23.909 INFO:tasks.qemu.client.0.vm05.stdout:After this operation, 108 kB of additional disk space will be used. 
2026-03-21T15:26:23.912 INFO:tasks.qemu.client.0.vm05.stdout:Get:1 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main libssl1.0.0 amd64 1.0.1-4ubuntu5.45 [1055 kB] 2026-03-21T15:26:24.195 INFO:tasks.qemu.client.0.vm05.stdout:Get:2 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main ca-certificates all 20190110~12.04.1 [179 kB] 2026-03-21T15:26:24.259 INFO:tasks.qemu.client.0.vm05.stdout:dpkg-preconfigure: unable to re-open stdin: No such file or directory 2026-03-21T15:26:24.270 INFO:tasks.qemu.client.0.vm05.stdout:Fetched 1233 kB in 1s (1076 kB/s) 2026-03-21T15:26:24.507 INFO:tasks.qemu.client.0.vm05.stdout:(Reading database ... 36182 files and directories currently installed.) 2026-03-21T15:26:24.510 INFO:tasks.qemu.client.0.vm05.stdout:Preparing to replace libssl1.0.0 1.0.1-4ubuntu5.5 (using .../libssl1.0.0_1.0.1-4ubuntu5.45_amd64.deb) ... 2026-03-21T15:26:24.526 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking replacement libssl1.0.0 ... 2026-03-21T15:26:24.606 INFO:tasks.qemu.client.0.vm05.stdout:Setting up libssl1.0.0 (1.0.1-4ubuntu5.45) ... 2026-03-21T15:26:24.718 INFO:tasks.qemu.client.0.vm05.stdout:Processing triggers for libc-bin ... 2026-03-21T15:26:24.723 INFO:tasks.qemu.client.0.vm05.stdout:ldconfig deferred processing now taking place 2026-03-21T15:26:24.901 INFO:tasks.qemu.client.0.vm05.stdout:(Reading database ... 36182 files and directories currently installed.) 2026-03-21T15:26:24.906 INFO:tasks.qemu.client.0.vm05.stdout:Preparing to replace ca-certificates 20111211 (using .../ca-certificates_20190110~12.04.1_all.deb) ... 2026-03-21T15:26:24.922 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking replacement ca-certificates ... 2026-03-21T15:26:25.013 INFO:tasks.qemu.client.0.vm05.stdout:Processing triggers for man-db ... 2026-03-21T15:26:25.421 INFO:tasks.qemu.client.0.vm05.stdout:Setting up ca-certificates (20190110~12.04.1) ... 2026-03-21T15:26:27.530 INFO:tasks.qemu.client.0.vm05.stdout:Updating certificates in /etc/ssl/certs... 
73 added, 78 removed; done. 2026-03-21T15:26:27.532 INFO:tasks.qemu.client.0.vm05.stdout:Running hooks in /etc/ca-certificates/update.d....done. 2026-03-21T15:26:28.804 INFO:tasks.qemu.client.0.vm05.stdout:Updating certificates in /etc/ssl/certs... 0 added, 1 removed; done. 2026-03-21T15:26:28.806 INFO:tasks.qemu.client.0.vm05.stdout:Running hooks in /etc/ca-certificates/update.d....done. 2026-03-21T15:26:28.811 INFO:tasks.qemu.client.0.vm05.stdout:Reading package lists... 2026-03-21T15:26:28.964 INFO:tasks.qemu.client.0.vm05.stdout:Building dependency tree... 2026-03-21T15:26:28.964 INFO:tasks.qemu.client.0.vm05.stdout:Reading state information... 2026-03-21T15:26:29.045 INFO:tasks.qemu.client.0.vm05.stdout:The following extra packages will be installed: 2026-03-21T15:26:29.047 INFO:tasks.qemu.client.0.vm05.stdout: libgssglue1 libnfsidmap2 libtirpc1 rpcbind 2026-03-21T15:26:29.049 INFO:tasks.qemu.client.0.vm05.stdout:The following NEW packages will be installed: 2026-03-21T15:26:29.051 INFO:tasks.qemu.client.0.vm05.stdout: libgssglue1 libnfsidmap2 libtirpc1 nfs-common rpcbind 2026-03-21T15:26:29.291 INFO:tasks.qemu.client.0.vm05.stdout:0 upgraded, 5 newly installed, 0 to remove and 226 not upgraded. 2026-03-21T15:26:29.292 INFO:tasks.qemu.client.0.vm05.stdout:Need to get 424 kB of archives. 2026-03-21T15:26:29.295 INFO:tasks.qemu.client.0.vm05.stdout:After this operation, 1326 kB of additional disk space will be used. 
2026-03-21T15:26:29.299 INFO:tasks.qemu.client.0.vm05.stdout:Get:1 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main libgssglue1 amd64 0.3-4ubuntu0.1 [22.5 kB] 2026-03-21T15:26:29.502 INFO:tasks.qemu.client.0.vm05.stdout:Get:2 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main libtirpc1 amd64 0.2.2-5ubuntu0.1 [85.0 kB] 2026-03-21T15:26:29.786 INFO:tasks.qemu.client.0.vm05.stdout:Get:3 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main rpcbind amd64 0.2.0-7ubuntu1.3 [43.1 kB] 2026-03-21T15:26:29.918 INFO:tasks.qemu.client.0.vm05.stdout:Get:4 http://old-releases.ubuntu.com/ubuntu/ precise/main libnfsidmap2 amd64 0.25-1ubuntu2 [32.0 kB] 2026-03-21T15:26:30.044 INFO:tasks.qemu.client.0.vm05.stdout:Get:5 http://old-releases.ubuntu.com/ubuntu/ precise-updates/main nfs-common amd64 1:1.2.5-3ubuntu3.2 [241 kB] 2026-03-21T15:26:30.279 INFO:tasks.qemu.client.0.vm05.stdout:dpkg-preconfigure: unable to re-open stdin: No such file or directory 2026-03-21T15:26:30.288 INFO:tasks.qemu.client.0.vm05.stdout:Fetched 424 kB in 1s (360 kB/s) 2026-03-21T15:26:30.326 INFO:tasks.qemu.client.0.vm05.stdout:Selecting previously unselected package libgssglue1. 2026-03-21T15:26:30.331 INFO:tasks.qemu.client.0.vm05.stdout:(Reading database ... 36174 files and directories currently installed.) 2026-03-21T15:26:30.335 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking libgssglue1 (from .../libgssglue1_0.3-4ubuntu0.1_amd64.deb) ... 2026-03-21T15:26:30.376 INFO:tasks.qemu.client.0.vm05.stdout:Selecting previously unselected package libtirpc1. 2026-03-21T15:26:30.381 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking libtirpc1 (from .../libtirpc1_0.2.2-5ubuntu0.1_amd64.deb) ... 2026-03-21T15:26:30.432 INFO:tasks.qemu.client.0.vm05.stdout:Selecting previously unselected package rpcbind. 2026-03-21T15:26:30.434 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking rpcbind (from .../rpcbind_0.2.0-7ubuntu1.3_amd64.deb) ... 
2026-03-21T15:26:30.492 INFO:tasks.qemu.client.0.vm05.stdout:Selecting previously unselected package libnfsidmap2. 2026-03-21T15:26:30.496 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking libnfsidmap2 (from .../libnfsidmap2_0.25-1ubuntu2_amd64.deb) ... 2026-03-21T15:26:30.538 INFO:tasks.qemu.client.0.vm05.stdout:Selecting previously unselected package nfs-common. 2026-03-21T15:26:30.541 INFO:tasks.qemu.client.0.vm05.stdout:Unpacking nfs-common (from .../nfs-common_1%3a1.2.5-3ubuntu3.2_amd64.deb) ... 2026-03-21T15:26:30.598 INFO:tasks.qemu.client.0.vm05.stdout:Processing triggers for man-db ... 2026-03-21T15:26:30.860 INFO:tasks.qemu.client.0.vm05.stdout:Processing triggers for ureadahead ... 2026-03-21T15:26:30.891 INFO:tasks.qemu.client.0.vm05.stdout:Setting up libgssglue1 (0.3-4ubuntu0.1) ... 2026-03-21T15:26:30.918 INFO:tasks.qemu.client.0.vm05.stdout:Setting up libtirpc1 (0.2.2-5ubuntu0.1) ... 2026-03-21T15:26:30.941 INFO:tasks.qemu.client.0.vm05.stdout:Setting up rpcbind (0.2.0-7ubuntu1.3) ... 2026-03-21T15:26:30.978 INFO:tasks.qemu.client.0.vm05.stdout: Removing any system startup links for /etc/init.d/rpcbind ... 2026-03-21T15:26:31.009 INFO:tasks.qemu.client.0.vm05.stdout:portmap start/running, process 8349 2026-03-21T15:26:31.015 INFO:tasks.qemu.client.0.vm05.stdout:Setting up libnfsidmap2 (0.25-1ubuntu2) ... 2026-03-21T15:26:31.034 INFO:tasks.qemu.client.0.vm05.stdout:Setting up nfs-common (1:1.2.5-3ubuntu3.2) ... 2026-03-21T15:26:31.181 INFO:tasks.qemu.client.0.vm05.stdout: 2026-03-21T15:26:31.182 INFO:tasks.qemu.client.0.vm05.stdout:Creating config file /etc/idmapd.conf with new version 2026-03-21T15:26:31.314 INFO:tasks.qemu.client.0.vm05.stdout: 2026-03-21T15:26:31.315 INFO:tasks.qemu.client.0.vm05.stdout:Creating config file /etc/default/nfs-common with new version 2026-03-21T15:26:31.384 INFO:tasks.qemu.client.0.vm05.stdout:Adding system user `statd' (UID 106) ... 
2026-03-21T15:26:31.386 INFO:tasks.qemu.client.0.vm05.stdout:Adding new user `statd' (UID 106) with group `nogroup' ... 2026-03-21T15:26:31.416 INFO:tasks.qemu.client.0.vm05.stdout:Not creating home directory `/var/lib/nfs'. 2026-03-21T15:26:31.460 INFO:tasks.qemu.client.0.vm05.stdout:statd start/running, process 8583 2026-03-21T15:26:31.484 INFO:tasks.qemu.client.0.vm05.stdout:gssd stop/post-stop, process 8614 2026-03-21T15:26:32.008 INFO:tasks.qemu.client.0.vm05.stdout:idmapd start/running, process 8654 2026-03-21T15:26:32.015 INFO:tasks.qemu.client.0.vm05.stdout:Processing triggers for libc-bin ... 2026-03-21T15:26:32.018 INFO:tasks.qemu.client.0.vm05.stdout:ldconfig deferred processing now taking place 2026-03-21T15:26:32.331 INFO:tasks.qemu.client.0.vm05.stdout:mount.nfs: timeout set for Sat Mar 21 15:28:32 2026 2026-03-21T15:26:32.334 INFO:tasks.qemu.client.0.vm05.stdout:mount.nfs: trying text-based options 'proto=tcp,vers=4,addr=10.0.2.2,clientaddr=10.0.2.15' 2026-03-21T15:26:32.335 INFO:tasks.qemu.client.0.vm05.stdout:10.0.2.2:/export/client.0 on /mnt/log type nfs (rw,proto=tcp) 2026-03-21T15:26:32.355 INFO:tasks.qemu.client.0.vm05.stdout:mount: block device /dev/sr0 is write-protected, mounting read-only 2026-03-21T15:34:26.155 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:26.153+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:26.155 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:26.153+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:27.120 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:27.118+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 
osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:27.120 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:27.118+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:28.169 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:28.168+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:28.170 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:28.168+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:29.160 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:29.159+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:29.161 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:29.159+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:30.164 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:30.162+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:30.164 
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:30.162+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:31.084 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:31.083+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:31.084 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:31.083+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:31.145 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:31.143+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:31.145 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:31.143+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:32.100 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:32.098+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:32.100 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:32.098+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 
front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:32.128 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:32.127+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:32.128 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:32.127+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:33.116 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:33.115+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:33.117 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:33.115+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:33.132 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:33.130+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:33.132 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:33.130+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:34.150 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:34.149+0000 7f753d95a640 -1 
osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:34.150 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:34.149+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:34.176 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:34.174+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:34.176 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:34.174+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:35.137 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:35.136+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:35.137 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:35.136+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:35.149 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:35.148+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 
2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:35.149 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:35.148+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:36.164 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:36.162+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:36.164 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:36.162+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:36.171 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:36.170+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:36.171 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:36.170+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:37.164 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:37.162+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:37.164 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:37.162+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 
192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:37.182 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:37.180+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:37.182 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:37.180+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:38.182 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:38.180+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:38.182 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:38.180+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:34:38.221 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:38.219+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:38.221 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:38.219+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:34:39.149 
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:39.147+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:39.149 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:39.147+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:39.213 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:39.212+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:39.213 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:39.212+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:40.167 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:40.165+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:40.167 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:40.165+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:40.186 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:40.184+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:40.186 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:40.184+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:41.120 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:41.118+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:41.120 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:41.118+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:41.156 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:41.154+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:41.156 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:41.154+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:42.124 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:42.122+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:42.124 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:42.122+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:42.174 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:42.172+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:42.174 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:42.172+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:43.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:43.170+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:43.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:43.170+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:43.215 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:43.213+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:43.215 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:43.213+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:44.166 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:44.164+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:44.166 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:44.164+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:44.239 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:44.238+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:44.239 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:44.238+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:44.368 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:34:44.366+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 2 slow ops, oldest is osd_beacon(pgs [2.6,2.4,2.1] lec 20 last_purged_snaps_scrub 2026-03-21T14:43:16.180926+0000 osd_beacon_report_interval 300 v20)
2026-03-21T15:34:45.163 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:45.161+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:45.163 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:45.161+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:45.222 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:45.221+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:45.222 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:45.221+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:46.184 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:46.182+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:46.184 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:46.182+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:46.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:46.226+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:46.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:46.226+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:47.185 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:47.184+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:47.186 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:47.184+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:47.242 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:47.240+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:47.242 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:47.240+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:48.149 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:48.147+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:48.149 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:48.147+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:48.194 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:48.192+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:48.194 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:48.192+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:49.149 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:49.148+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:49.149 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:49.148+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:49.167 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:49.165+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:49.167 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:49.165+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:49.368 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:34:49.366+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 19 slow ops, oldest is osd_beacon(pgs [2.6,2.4,2.1] lec 20 last_purged_snaps_scrub 2026-03-21T14:43:16.180926+0000 osd_beacon_report_interval 300 v20)
2026-03-21T15:34:50.187 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:50.185+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:50.187 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:50.185+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:50.189 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:50.188+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:50.189 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:50.188+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:50.189 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:50.188+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46461 2.6 2:7a6890a3:::rbd_header.10b338175324:head [call rbd.metadata_set in=433b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20)
2026-03-21T15:34:51.178 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:51.177+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.397194+0000 front 2026-03-21T15:34:04.396992+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:51.179 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:51.177+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:51.184 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:51.183+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:05.351843+0000 front 2026-03-21T15:34:05.351835+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:51.184 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:51.183+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:51.184 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:51.183+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.4297.0:46461 2.6 2:7a6890a3:::rbd_header.10b338175324:head [call rbd.metadata_set in=433b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20)
2026-03-21T15:34:51.576 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:34:02.142699+0000 front 2026-03-21T15:34:02.142855+0000 (oldest deadline 2026-03-21T15:34:30.441822+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:34:02.142877+0000 front 2026-03-21T15:34:02.142924+0000 (oldest deadline 2026-03-21T15:34:30.441822+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:02.145786+0000 front 2026-03-21T15:34:02.143414+0000 (oldest deadline 2026-03-21T15:34:30.441822+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc23112e640 -1 osd.3 20 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:34:04.802493+0000 front 2026-03-21T15:34:04.802240+0000 (oldest deadline 2026-03-21T15:34:34.897110+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc23112e640 -1 osd.3 20 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:34:04.802261+0000 front 2026-03-21T15:34:04.802307+0000 (oldest deadline 2026-03-21T15:34:34.897110+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:34:51.565+0000 7fc23112e640 -1 osd.3 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:04.802409+0000 front 2026-03-21T15:34:04.802460+0000 (oldest deadline 2026-03-21T15:34:34.897110+0000)
2026-03-21T15:34:51.577 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:34:51.567+0000 7fc23112e640 -1 osd.3 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4293.0:33116 2.0 2:0c14d48c:::rbd_data.10af391361bd.0000000000000814:head [write 1736704~1048576 in=1048576b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20)
2026-03-21T15:34:52.145 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:52.143+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:52.189 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:52.187+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:53.122 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:53.121+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:53.141 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:53.139+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:54.149 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:54.147+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:54.171 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:54.169+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:55.109 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:55.108+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:55.196 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:55.194+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:56.154 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:56.152+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:56.188 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:56.187+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:57.107 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:57.105+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:57.212 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:57.211+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:58.124 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:58.122+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:58.262 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:58.260+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:59.083 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:34:59.081+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:34:59.261 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:34:59.259+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:34:59.369 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:34:59.367+0000 7f4e00c49640 -1 mon.a@0(leader) e1 get_health_metrics reporting 2 slow ops, oldest is log(1 entries from seq 126 at 2026-03-21T15:34:27.345519+0000)
2026-03-21T15:35:00.070 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:00.068+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:00.310 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:00.309+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:01.035 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:01.033+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:01.287 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:01.286+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:02.053 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:02.051+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:02.253 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:02.251+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:03.010 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:03.008+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:03.207 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:03.206+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:04.018 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:04.016+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:04.200 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:04.198+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:04.993 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:04.991+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:05.209 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:05.208+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:05.996 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:05.994+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:06.253 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:06.252+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:07.038 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:07.036+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:07.213 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:07.212+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:08.074 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:08.073+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:08.205 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:08.203+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:09.037 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:09.035+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:09.195 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:09.193+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:10.007 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:10.006+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:10.241 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:10.240+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:11.048 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:11.046+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:11.217 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:11.215+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:12.081 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:12.079+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:12.262 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:12.260+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:13.079 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:13.077+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:13.079 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:13.077+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:13.231 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:13.230+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:14.098 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:14.096+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:14.098 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:14.096+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:14.249 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:14.247+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:14.249 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:14.247+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:15.135 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:15.133+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:15.135 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:15.133+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:15.258 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:15.256+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:15.258 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:15.256+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:16.161 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:16.159+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:16.161 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:16.159+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:16.251 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:16.250+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:16.252 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:16.250+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:17.204 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:17.203+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:17.204 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:17.203+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:17.204 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:17.203+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:17.204 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:17.203+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:18.213 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:18.211+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:18.213 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:18.211+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:18.253 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:18.251+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:18.253 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:18.251+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:19.206 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:19.204+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:19.206 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:19.204+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:19.250 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:19.249+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:19.250 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:19.249+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:19.370 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:35:19.368+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 2 slow ops, oldest is osd_failure(failed timeout osd.3 [v2:192.168.123.105:6800/1399269682,v1:192.168.123.105:6801/1399269682] for 44sec e20 v20)
2026-03-21T15:35:20.189 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:20.188+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000)
2026-03-21T15:35:20.189 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:20.188+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000)
2026-03-21T15:35:20.269 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:20.268+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000)
2026-03-21T15:35:20.269 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:20.268+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000)
2026-03-21T15:35:21.198 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:21.196+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 
2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:21.198 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:21.196+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:21.306 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:21.304+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:21.306 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:21.304+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:22.224 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:22.223+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:22.224 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:22.223+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:22.262 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:22.260+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:22.262 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:22.260+0000 7f91f119e640 -1 osd.1 20 
heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:23.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:23.225+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:23.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:23.225+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:23.240 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:23.238+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:23.240 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:23.238+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:24.224 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:24.223+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:24.224 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:24.223+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 
2026-03-21T15:35:24.281 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:24.280+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:24.282 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:24.280+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:24.370 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:35:24.369+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 9 slow ops, oldest is osd_failure(failed timeout osd.3 [v2:192.168.123.105:6800/1399269682,v1:192.168.123.105:6801/1399269682] for 44sec e20 v20) 2026-03-21T15:35:25.235 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:25.233+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:25.235 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:25.233+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:25.250 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:25.248+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:25.250 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:25.248+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 
2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:26.186 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:26.184+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:26.186 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:26.184+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:26.210 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:26.208+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:26.210 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:26.208+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:27.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:27.225+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:27.227 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:27.225+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:27.254 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:27.252+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:27.254 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:27.252+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:28.190 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:28.188+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:28.190 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:28.188+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:28.237 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:28.235+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:28.237 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:28.235+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:29.204 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:29.203+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 
front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:29.204 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:29.203+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:29.221 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:29.220+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:29.221 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:29.220+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:29.371 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:35:29.369+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 17 slow ops, oldest is osd_failure(failed timeout osd.3 [v2:192.168.123.105:6800/1399269682,v1:192.168.123.105:6801/1399269682] for 44sec e20 v20) 2026-03-21T15:35:30.175 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:30.174+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:30.175 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:30.174+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:30.200 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:30.199+0000 7f753d95a640 -1 
osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:30.200 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:30.199+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:31.147 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:31.146+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:31.147 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:31.146+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:31.203 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:31.202+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:31.203 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:31.202+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:32.108 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:32.107+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 
2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:32.108 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:32.107+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:32.108 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:32.107+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:32.208 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:32.207+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:32.208 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:32.207+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:33.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:33.151+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:33.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:33.151+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:33.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:33.151+0000 7f91f119e640 -1 osd.1 20 
get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:33.188 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:33.186+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:33.188 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:33.186+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:34.147 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:34.146+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:34.147 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:34.146+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:34.147 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:34.146+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:34.204 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:34.202+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 
2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:34.204 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:34.202+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:34.371 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:35:34.369+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 23 slow ops, oldest is osd_failure(failed timeout osd.3 [v2:192.168.123.105:6800/1399269682,v1:192.168.123.105:6801/1399269682] for 44sec e20 v20) 2026-03-21T15:35:35.167 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:35.165+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:35.167 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:35.165+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:35.190 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:35.189+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:35.191 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:35.189+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:35.191 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:35.189+0000 7f91f119e640 -1 osd.1 20 
get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:36.150 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:36.148+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:36.150 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:36.148+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:36.211 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:36.208+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:36.211 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:36.208+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:36.211 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:36.209+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:37.175 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:37.173+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 
2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:37.175 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:37.173+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:37.192 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:37.191+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:37.193 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:37.191+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:37.193 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:37.191+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:38.188 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:38.186+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.571454+0000 front 2026-03-21T15:34:51.569982+0000 (oldest deadline 2026-03-21T15:35:12.750049+0000) 2026-03-21T15:35:38.188 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:38.186+0000 7f91f119e640 -1 osd.1 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:05.351857+0000 front 2026-03-21T15:34:05.351812+0000 (oldest deadline 2026-03-21T15:34:30.646823+0000) 2026-03-21T15:35:38.188 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:35:38.186+0000 7f91f119e640 -1 osd.1 20 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4297.0:46473 2.6 2:7a6890a3:::rbd_header.10b338175324:head [watch ping cookie 139786845004736] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e20) 2026-03-21T15:35:38.222 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:38.220+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:34:51.569577+0000 front 2026-03-21T15:34:51.569692+0000 (oldest deadline 2026-03-21T15:35:13.288924+0000) 2026-03-21T15:35:38.222 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:38.220+0000 7f753d95a640 -1 osd.0 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:04.397076+0000 front 2026-03-21T15:34:04.397301+0000 (oldest deadline 2026-03-21T15:34:25.484966+0000) 2026-03-21T15:35:39.035 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:35:39.026+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:34:51.571152+0000 front 2026-03-21T15:34:51.571462+0000 (oldest deadline 2026-03-21T15:35:31.683177+0000) 2026-03-21T15:35:39.035 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:35:39.026+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:35:39.025562+0000 front 2026-03-21T15:34:51.571174+0000 (oldest deadline 2026-03-21T15:35:31.683177+0000) 2026-03-21T15:35:39.035 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:35:39.026+0000 7fc723d93640 -1 osd.2 20 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:34:51.571479+0000 front 2026-03-21T15:34:51.571143+0000 (oldest deadline 2026-03-21T15:35:31.683177+0000) 2026-03-21T15:35:39.372 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:35:39.370+0000 7f4e00c49640 -1 mon.a@0(leader) e1 get_health_metrics reporting 3 slow ops, oldest is log(1 
entries from seq 163 at 2026-03-21T15:34:52.648539+0000) 2026-03-21T15:35:39.659 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:35:39.657+0000 7f753c157640 -1 osd.0 21 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:35:40.030 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:35:40.023+0000 7fc22f92b640 -1 osd.3 21 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:35:40.030 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:35:40.028+0000 7fc722590640 -1 osd.2 21 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:35:50.127 INFO:tasks.ceph.mon.b.vm05.stderr:2026-03-21T15:35:50.116+0000 7f7b6d8da640 -1 mon.b@1(peon).paxos(paxos updating c 2511..3124) lease_expire from mon.0 v2:192.168.123.101:3300/0 is 1.922996402s seconds in the past; mons are probably laggy (or possibly clocks are too skewed) 2026-03-21T15:36:14.144 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:14.143+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:14.144 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:14.143+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:14.189 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:14.188+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:14.189 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:14.188+0000 7f91f119e640 -1 osd.1 
23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:15.156 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:15.154+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:15.156 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:15.154+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:15.188 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:15.187+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:15.188 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:15.187+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:16.194 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:16.193+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:16.194 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:16.193+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 
2026-03-21T15:36:16.198 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:16.197+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:16.198 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:16.197+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:17.156 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:17.154+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:17.156 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:17.154+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:17.241 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:17.239+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:17.241 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:17.239+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:18.133 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:18.132+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:18.133 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:18.132+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:18.234 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:18.232+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:18.234 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:18.232+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:19.157 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:19.156+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:19.157 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:19.156+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:19.231 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:19.230+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:19.231 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:19.230+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:20.110 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:20.110+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:20.110 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:20.110+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:20.188 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:20.187+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:20.188 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:20.187+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:21.097 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:21.096+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:21.097 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:21.096+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:21.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:21.152+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:21.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:21.152+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:22.084 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:22.084+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:22.085 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:22.084+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:22.111 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:22.110+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:22.111 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:22.110+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:23.078 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:23.077+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:23.079 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:23.077+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:23.151 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:23.150+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:23.151 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:23.150+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:24.097 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:24.096+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:24.097 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:24.096+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:24.178 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:24.177+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:24.178 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:24.177+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:24.377 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:36:24.375+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 2 slow ops, oldest is monmgrreport(gid 4106, 2 checks, 0 progress events)
2026-03-21T15:36:25.077 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:25.076+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:25.077 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:25.076+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:25.196 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:25.196+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:25.196 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:25.196+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:26.052 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:26.051+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:26.052 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:26.051+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:26.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:26.171+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:26.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:26.171+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:27.085 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:27.084+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:27.085 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:27.084+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:27.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:27.152+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:27.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:27.152+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:28.122 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:28.122+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:28.123 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:28.122+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:28.190 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:28.189+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:28.190 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:28.189+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:29.164 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:29.163+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:29.164 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:29.163+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:29.192 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:29.192+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:29.192 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:29.192+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:29.376 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:36:29.375+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 10 slow ops, oldest is monmgrreport(gid 4106, 2 checks, 0 progress events)
2026-03-21T15:36:30.147 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:30.146+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:30.147 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:30.146+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:30.233 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:30.232+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:30.233 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:30.232+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:31.098 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:31.097+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:31.098 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:31.097+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:31.228 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:31.227+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:31.228 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:31.227+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:32.051 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:32.051+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:32.051 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:32.051+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:32.243 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:32.242+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:32.243 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:32.242+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:33.038 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:33.038+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:33.038 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:33.038+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:33.242 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:33.242+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:33.242 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:33.242+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:34.065 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:34.064+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:34.065 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:34.064+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:34.218 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:34.218+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:34.219 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:34.218+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:34.376 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:36:34.376+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 16 slow ops, oldest is monmgrreport(gid 4106, 2 checks, 0 progress events)
2026-03-21T15:36:35.046 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:35.045+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:35.046 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:35.045+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:35.176 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:35.176+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:35.177 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:35.176+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:36.076 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:36.076+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:36.076 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:36.076+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:36.136 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:36.135+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:36.136 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:36.135+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:37.064 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:37.063+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:37.064 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:37.063+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:37.160 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:37.159+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:37.160 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:37.159+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:38.073 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:38.072+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:38.073 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:38.072+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:38.127 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:38.126+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:38.127 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:38.126+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:39.055 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:39.054+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:39.055 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:39.054+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:39.169 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:39.168+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:39.169 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:39.168+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:39.377 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:36:39.376+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 25 slow ops, oldest is monmgrreport(gid 4106, 2 checks, 0 progress events)
2026-03-21T15:36:40.062 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:40.061+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:40.062 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:40.061+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:40.207 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:40.206+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:40.207 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:40.206+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:41.018 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:41.017+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:41.018 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:41.017+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:41.246 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:41.245+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:41.246 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:41.245+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:41.980 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:41.979+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:41.980 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:41.979+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:42.281 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:42.281+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:42.282 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:42.281+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:42.979 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:42.979+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:42.979 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:42.979+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:43.305 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:43.304+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:43.305 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:43.304+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:43.305 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:43.304+0000 7f91f119e640 -1 osd.1 23 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4293.0:33280 2.6 2:6ba341aa:::rbd_header.10af391361bd:head [watch ping cookie 139784305420992] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e23)
2026-03-21T15:36:43.974 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:43.974+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:43.975 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:43.974+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:44.344 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:44.343+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:44.344 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:44.343+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:44.344 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:44.343+0000 7f91f119e640 -1 osd.1 23 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4293.0:33280 2.6 2:6ba341aa:::rbd_header.10af391361bd:head [watch ping cookie 139784305420992] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e23)
2026-03-21T15:36:44.377 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:36:44.376+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 35 slow ops, oldest is monmgrreport(gid 4106, 2 checks, 0 progress events)
2026-03-21T15:36:44.949 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:44.948+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:44.949 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:44.948+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000)
2026-03-21T15:36:45.300 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:45.299+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:45.300 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:45.299+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000)
2026-03-21T15:36:45.300 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:45.299+0000 7f91f119e640 -1 osd.1 23 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4293.0:33280 2.6 2:6ba341aa:::rbd_header.10af391361bd:head [watch ping cookie 139784305420992] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e23)
2026-03-21T15:36:45.936 
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:45.935+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.096554+0000 front 2026-03-21T15:35:50.096768+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:45.936 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:45.935+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:46.251 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:46.250+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:35:50.477164+0000 front 2026-03-21T15:35:50.477042+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:46.251 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:46.250+0000 7f91f119e640 -1 osd.1 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.477309+0000 front 2026-03-21T15:35:50.477018+0000 (oldest deadline 2026-03-21T15:36:13.355431+0000) 2026-03-21T15:36:46.251 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:36:46.250+0000 7f91f119e640 -1 osd.1 23 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.4293.0:33280 2.6 2:6ba341aa:::rbd_header.10af391361bd:head [watch ping cookie 139784305420992] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e23) 2026-03-21T15:36:46.351 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:36:46.339+0000 7fc723d93640 -1 osd.2 23 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:35:50.097287+0000 front 2026-03-21T15:35:50.102896+0000 (oldest deadline 2026-03-21T15:36:20.461426+0000) 2026-03-21T15:36:46.351 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:36:46.339+0000 7fc723d93640 -1 osd.2 23 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since 
back 2026-03-21T15:35:50.097230+0000 front 2026-03-21T15:35:50.096600+0000 (oldest deadline 2026-03-21T15:36:20.461426+0000) 2026-03-21T15:36:46.351 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:36:46.339+0000 7fc723d93640 -1 osd.2 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.103164+0000 front 2026-03-21T15:35:50.102930+0000 (oldest deadline 2026-03-21T15:36:20.461426+0000) 2026-03-21T15:36:46.931 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:46.931+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:47.964 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:36:47.963+0000 7f753d95a640 -1 osd.0 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.096701+0000 front 2026-03-21T15:35:50.096233+0000 (oldest deadline 2026-03-21T15:36:13.158375+0000) 2026-03-21T15:36:48.722 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:36:47.809+0000 7fc723d93640 -1 osd.2 23 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:35:50.103164+0000 front 2026-03-21T15:35:50.102930+0000 (oldest deadline 2026-03-21T15:36:20.461426+0000) 2026-03-21T15:36:49.747 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:36:49.741+0000 7fc22f92b640 -1 osd.3 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:36:49.747 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:36:49.742+0000 7fc722590640 -1 osd.2 24 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:38:10.085 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:10.084+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 
(oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:10.085 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:10.084+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:11.078 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:11.077+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:11.078 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:11.077+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:12.127 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:12.126+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:12.127 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:12.126+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:13.105 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:13.104+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:13.105 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:13.104+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 
192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:13.466 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:13.465+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:13.466 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:13.465+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:14.059 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:14.058+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:14.059 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:14.058+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:14.501 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:14.500+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:14.501 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:14.500+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:15.019 
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:15.018+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:15.020 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:15.018+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:15.491 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:15.490+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:15.491 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:15.490+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:16.063 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:16.062+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:16.063 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:16.062+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:16.534 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:16.533+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 
front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:16.534 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:16.533+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:17.031 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:17.030+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:17.031 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:17.030+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:17.536 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:17.535+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:17.536 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:17.535+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:18.020 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:18.019+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:18.020 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:18.019+0000 7f753d95a640 -1 
osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:18.495 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:18.494+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:18.495 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:18.494+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:18.974 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:18.973+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:18.974 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:18.973+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:19.465 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:19.463+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:19.465 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:19.463+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 
2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:19.943 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:19.942+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:19.943 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:19.942+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:20.454 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:20.453+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:20.454 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:20.453+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:20.991 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:20.990+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:20.991 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:20.990+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:21.433 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:21.431+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 
192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:21.433 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:21.431+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:22.020 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:22.019+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:22.020 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:22.019+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:22.445 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:22.444+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:22.445 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:22.444+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:22.979 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:22.979+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:22.980 
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:22.979+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:23.458 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:23.457+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:47.172272+0000 front 2026-03-21T15:37:47.172382+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:23.458 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:23.457+0000 7f91f119e640 -1 osd.1 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:47.173004+0000 front 2026-03-21T15:37:47.172991+0000 (oldest deadline 2026-03-21T15:38:13.067722+0000) 2026-03-21T15:38:23.949 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:23.949+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:37:48.986046+0000 front 2026-03-21T15:37:48.986208+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:23.949 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:23.949+0000 7f753d95a640 -1 osd.0 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:48.986180+0000 front 2026-03-21T15:37:48.986224+0000 (oldest deadline 2026-03-21T15:38:09.469028+0000) 2026-03-21T15:38:24.298 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:38:24.286+0000 7fc723d93640 -1 osd.2 26 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:37:49.335780+0000 front 2026-03-21T15:37:49.336452+0000 (oldest deadline 2026-03-21T15:38:15.090985+0000) 2026-03-21T15:38:24.298 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:38:24.286+0000 7fc723d93640 -1 osd.2 26 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:37:49.336710+0000 
front 2026-03-21T15:37:49.336717+0000 (oldest deadline 2026-03-21T15:38:15.090985+0000) 2026-03-21T15:38:24.298 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:38:24.286+0000 7fc723d93640 -1 osd.2 26 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:37:49.337057+0000 front 2026-03-21T15:37:49.337027+0000 (oldest deadline 2026-03-21T15:38:15.090985+0000) 2026-03-21T15:38:25.293 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:38:25.285+0000 7fc722590640 -1 osd.2 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:38:25.293 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:38:25.289+0000 7fc22f92b640 -1 osd.3 27 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:38:50.869 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:50.868+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:38:27.463526+0000 front 2026-03-21T15:38:27.463554+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:50.869 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:50.868+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:51.859 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:51.857+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:38:27.463526+0000 front 2026-03-21T15:38:27.463554+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:51.859 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:51.857+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 
2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:52.833 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:52.832+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:53.802 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:53.801+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:54.766 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:54.765+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:55.467 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:55.466+0000 7f91f119e640 -1 osd.1 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:29.371533+0000 front 2026-03-21T15:38:29.371581+0000 (oldest deadline 2026-03-21T15:38:54.670558+0000) 2026-03-21T15:38:55.782 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:55.781+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:56.446 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:38:56.446+0000 7f91f119e640 -1 osd.1 29 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:29.371533+0000 front 2026-03-21T15:38:29.371581+0000 (oldest deadline 2026-03-21T15:38:54.670558+0000) 2026-03-21T15:38:56.748 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:38:56.747+0000 7f753d95a640 -1 osd.0 29 heartbeat_check: no reply from 
192.168.123.105:6804 osd.3 since back 2026-03-21T15:38:27.466708+0000 front 2026-03-21T15:38:27.467149+0000 (oldest deadline 2026-03-21T15:38:50.272892+0000) 2026-03-21T15:38:57.913 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:38:57.906+0000 7fc22f92b640 -1 osd.3 30 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:40:09.341 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:09.340+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:09.341 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:09.340+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:10.388 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:10.387+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:10.388 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:10.387+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:11.431 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:11.430+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:11.431 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:11.430+0000 7f91f119e640 -1 osd.1 32 
heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:11.940 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:11.939+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:11.941 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:11.939+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:12.457 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:12.455+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:12.457 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:12.455+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:12.908 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:12.907+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:12.908 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:12.907+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 
2026-03-21T15:40:13.451 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:13.450+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:13.451 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:13.450+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:13.952 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:13.949+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:13.952 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:13.949+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:14.477 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:14.475+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:14.477 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:14.475+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:14.963 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:14.962+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 
2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:14.964 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:14.962+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:15.514 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:15.512+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:15.514 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:15.512+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:16.000 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:15.999+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:16.000 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:15.999+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:16.478 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:16.477+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:16.479 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:16.477+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:16.996 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:16.995+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:16.996 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:16.995+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:17.528 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:17.527+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:17.528 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:17.527+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:18.022 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:18.021+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:18.022 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:18.021+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 
front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:18.535 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:18.533+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:18.535 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:18.533+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:19.058 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:19.056+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:19.058 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:19.056+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:19.405 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:40:19.403+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 1698 at 2026-03-21T15:39:48.765273+0000) 2026-03-21T15:40:19.569 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:19.568+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:19.569 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:19.568+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 
osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:20.082 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:20.081+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:20.082 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:20.081+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:20.560 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:20.559+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:20.560 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:20.559+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:21.077 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:21.075+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:21.077 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:21.075+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:21.513 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:21.511+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:21.513 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:21.511+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:22.099 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:22.098+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:22.099 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:22.098+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:22.472 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:22.470+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:22.472 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:22.470+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:23.123 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:23.122+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 
front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:23.123 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:23.122+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:23.467 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:23.466+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:23.467 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:23.466+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:24.140 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:24.139+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:24.141 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:24.139+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:24.405 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:40:24.404+0000 7f4e00c49640 -1 mon.a@0(probing) e1 get_health_metrics reporting 6 slow ops, oldest is log(1 entries from seq 1698 at 2026-03-21T15:39:48.765273+0000) 2026-03-21T15:40:24.451 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:24.449+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6812 
osd.2 since back 2026-03-21T15:39:47.709963+0000 front 2026-03-21T15:39:47.709987+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:24.451 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:40:24.449+0000 7f91f119e640 -1 osd.1 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714032+0000 front 2026-03-21T15:39:47.714004+0000 (oldest deadline 2026-03-21T15:40:08.777343+0000) 2026-03-21T15:40:25.112 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:25.111+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:47.709913+0000 front 2026-03-21T15:39:47.709889+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:25.113 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:25.111+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:47.714003+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:25.404 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:40:25.386+0000 7fc23112e640 -1 osd.3 32 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:39:45.539525+0000 front 2026-03-21T15:39:45.537756+0000 (oldest deadline 2026-03-21T15:40:19.811858+0000) 2026-03-21T15:40:25.404 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:40:25.386+0000 7fc23112e640 -1 osd.3 32 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:39:45.539339+0000 front 2026-03-21T15:39:45.547896+0000 (oldest deadline 2026-03-21T15:40:19.811858+0000) 2026-03-21T15:40:25.404 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:40:25.386+0000 7fc23112e640 -1 osd.3 32 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:39:45.537796+0000 front 2026-03-21T15:39:45.539352+0000 (oldest deadline 2026-03-21T15:40:19.811858+0000) 2026-03-21T15:40:25.404 
INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:40:25.390+0000 7fc723d93640 -1 osd.2 32 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:39:45.532060+0000 front 2026-03-21T15:40:25.390215+0000 (oldest deadline 2026-03-21T15:40:16.214168+0000) 2026-03-21T15:40:25.405 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:40:25.390+0000 7fc723d93640 -1 osd.2 32 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:39:45.532081+0000 front 2026-03-21T15:40:25.390244+0000 (oldest deadline 2026-03-21T15:40:16.214168+0000) 2026-03-21T15:40:25.405 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:40:25.390+0000 7fc723d93640 -1 osd.2 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:39:45.547147+0000 front 2026-03-21T15:39:45.539247+0000 (oldest deadline 2026-03-21T15:40:16.214168+0000) 2026-03-21T15:40:25.430 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:40:25.424+0000 7fc23112e640 -1 osd.3 32 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.4293.0:34022 2.0 2:00aa88f2:::rbd_data.10af391361bd.000000000000083d:head [write 172032~524288 in=524288b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e32) 2026-03-21T15:40:26.129 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:40:26.128+0000 7f753d95a640 -1 osd.0 32 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:25.414315+0000 front 2026-03-21T15:39:47.710001+0000 (oldest deadline 2026-03-21T15:40:11.079267+0000) 2026-03-21T15:40:27.359 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:40:27.353+0000 7fc22f92b640 -1 osd.3 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:40:27.435 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:40:27.433+0000 7fc722590640 -1 osd.2 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-21T15:41:02.509 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:02.508+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:02.509 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:02.508+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:03.512 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:03.511+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:03.512 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:03.511+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:04.477 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:04.476+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:04.477 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:04.476+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:05.277 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:05.276+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 
front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:05.277 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:05.276+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:05.472 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:05.471+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:05.472 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:05.471+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:06.274 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:06.273+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:06.275 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:06.273+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:06.461 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:06.460+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:06.462 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:06.460+0000 7f91f119e640 -1 
osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:07.230 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:07.229+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:07.230 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:07.229+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:07.474 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:07.473+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:07.474 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:07.473+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:08.244 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:08.242+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:08.244 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:08.242+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 
2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:08.433 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:08.432+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:08.433 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:08.432+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:09.245 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:09.244+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:09.245 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:09.244+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:09.387 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:09.386+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:09.387 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:09.386+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:10.296 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:10.294+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 
192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:10.296 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:10.294+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:10.396 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:10.395+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:10.396 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:10.395+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:11.301 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:11.300+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:11.301 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:11.300+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:11.407 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:11.406+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:11.407 
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:11.406+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:12.335 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:12.334+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:12.335 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:12.334+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:12.410 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:12.409+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:12.410 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:12.409+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:13.371 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:13.369+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:13.371 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:13.369+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 
front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:13.376 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:13.374+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:13.376 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:13.374+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:14.362 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:14.361+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:14.362 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:14.361+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:14.369 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:14.367+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:14.369 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:14.367+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:15.352 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:15.351+0000 7f753d95a640 -1 
osd.0 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:39.608922+0000 front 2026-03-21T15:40:39.608875+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:15.353 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:15.351+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:15.378 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:15.377+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6812 osd.2 since back 2026-03-21T15:40:41.783046+0000 front 2026-03-21T15:40:41.783012+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:15.378 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:15.377+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:15.581 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:41:15.572+0000 7fc723d93640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.101:6812 osd.0 since back 2026-03-21T15:40:41.432714+0000 front 2026-03-21T15:40:41.432632+0000 (oldest deadline 2026-03-21T15:41:08.590909+0000) 2026-03-21T15:41:15.581 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:41:15.572+0000 7fc723d93640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.101:6804 osd.1 since back 2026-03-21T15:40:41.432602+0000 front 2026-03-21T15:40:41.432655+0000 (oldest deadline 2026-03-21T15:41:08.590909+0000) 2026-03-21T15:41:15.581 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:41:15.572+0000 7fc723d93640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.432529+0000 front 2026-03-21T15:40:41.432572+0000 (oldest deadline 
2026-03-21T15:41:08.590909+0000) 2026-03-21T15:41:16.314 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:16.313+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:16.405 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:16.404+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:17.331 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:17.330+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:17.358 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:17.357+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:18.349 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:18.348+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000) 2026-03-21T15:41:18.390 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:18.389+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000) 2026-03-21T15:41:19.335 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:19.334+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 
192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000)
2026-03-21T15:41:19.432 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:19.431+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000)
2026-03-21T15:41:20.294 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:41:20.293+0000 7f753d95a640 -1 osd.0 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:39.608793+0000 front 2026-03-21T15:40:39.608885+0000 (oldest deadline 2026-03-21T15:41:04.584494+0000)
2026-03-21T15:41:20.388 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:41:20.386+0000 7f91f119e640 -1 osd.1 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.782977+0000 front 2026-03-21T15:40:41.783107+0000 (oldest deadline 2026-03-21T15:41:02.282750+0000)
2026-03-21T15:41:20.509 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:41:16.848+0000 7fc723d93640 -1 osd.2 35 heartbeat_check: no reply from 192.168.123.105:6804 osd.3 since back 2026-03-21T15:40:41.432529+0000 front 2026-03-21T15:40:41.432572+0000 (oldest deadline 2026-03-21T15:41:08.590909+0000)
2026-03-21T15:41:20.559 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:41:20.550+0000 7fc2276f2640 -1 osd.3 36 _committed_osd_maps marked down 6 > osd_max_markdown_count 5 in last 600.000000 seconds, shutting down
2026-03-21T15:41:20.567 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:41:20.564+0000 7fc235b39640 -1 received signal: Interrupt from Kernel ( Could be generated by pthread_kill(), raise(), abort(), alarm() ) UID: 0
2026-03-21T15:41:20.567 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:41:20.564+0000 7fc235b39640 -1 osd.3 36 *** Got signal Interrupt ***
2026-03-21T15:41:20.567 INFO:tasks.ceph.osd.3.vm05.stderr:2026-03-21T15:41:20.564+0000 7fc235b39640 -1 osd.3 36 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T15:41:21.404 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:41:21.402+0000 7fc722590640 -1 osd.2 36 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-21T15:41:23.546 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~0s
2026-03-21T15:41:29.251 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~6s
2026-03-21T15:41:34.958 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~11s
2026-03-21T15:41:40.663 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~17s
2026-03-21T15:41:46.366 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~23s
2026-03-21T15:41:52.071 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~29s
2026-03-21T15:41:57.775 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~34s
2026-03-21T15:42:03.478 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~40s
2026-03-21T15:42:09.182 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~46s
2026-03-21T15:42:14.884 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~51s
2026-03-21T15:42:20.589 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~57s
2026-03-21T15:42:26.293 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~63s
2026-03-21T15:42:31.998 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~68s
2026-03-21T15:42:37.703 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~74s
2026-03-21T15:42:43.407 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~80s
2026-03-21T15:42:49.110 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~86s
2026-03-21T15:42:54.815 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~91s 2026-03-21T15:43:00.519 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~97s 2026-03-21T15:43:06.225 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~103s 2026-03-21T15:43:11.928 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~108s 2026-03-21T15:43:17.631 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~114s 2026-03-21T15:43:23.335 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~120s 2026-03-21T15:43:29.039 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~125s 2026-03-21T15:43:34.743 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~131s 2026-03-21T15:43:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1048.152034] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T15:43:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1048.155560] Stack: 2026-03-21T15:43:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1048.156031] Call Trace: 2026-03-21T15:43:36.574 INFO:tasks.qemu.client.0.vm05.stdout:[ 1048.156031] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:43:40.446 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~137s 2026-03-21T15:43:46.149 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~143s 2026-03-21T15:43:51.852 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~148s 2026-03-21T15:43:57.555 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~154s 2026-03-21T15:44:03.259 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~160s 2026-03-21T15:44:04.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 
1076.152030] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T15:44:04.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1076.154443] Stack: 2026-03-21T15:44:04.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1076.155201] Call Trace: 2026-03-21T15:44:04.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1076.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:44:08.963 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~165s 2026-03-21T15:44:10.100 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] INFO: rcu_sched detected stall on CPU 1 (t=15000 jiffies) 2026-03-21T15:44:10.101 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] Stack: 2026-03-21T15:44:10.102 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] Call Trace: 2026-03-21T15:44:10.102 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] 2026-03-21T15:44:10.103 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] 2026-03-21T15:44:10.114 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.692025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T15:44:10.131 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.696092] Stack: 2026-03-21T15:44:10.133 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.696092] Call Trace: 2026-03-21T15:44:10.158 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.696092] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:44:10.161 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.753136] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T15:44:10.161 INFO:tasks.qemu.client.0.vm05.stdout:[ 
1081.712047] Stack: 2026-03-21T15:44:10.162 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.712047] Call Trace: 2026-03-21T15:44:10.169 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.712047] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:44:10.170 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.712046] Stack: 2026-03-21T15:44:10.171 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.712046] Call Trace: 2026-03-21T15:44:10.177 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.712046] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:44:10.178 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.757134] Stack: 2026-03-21T15:44:10.179 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.757134] Call Trace: 2026-03-21T15:44:10.180 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.757134] 2026-03-21T15:44:10.181 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.757134] 2026-03-21T15:44:10.193 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.757134] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T15:44:10.195 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.787510] Stack: 2026-03-21T15:44:10.195 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788033] Call Trace: 2026-03-21T15:44:10.205 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788033] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:44:10.206 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788046] Stack: 2026-03-21T15:44:10.207 
INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788046] Call Trace: 2026-03-21T15:44:10.222 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788046] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:44:10.223 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788036] Stack: 2026-03-21T15:44:10.224 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788036] Call Trace: 2026-03-21T15:44:10.236 INFO:tasks.qemu.client.0.vm05.stdout:[ 1081.788036] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:44:14.667 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~171s 2026-03-21T15:44:20.370 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~177s 2026-03-21T15:44:26.075 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~183s 2026-03-21T15:44:31.778 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~188s 2026-03-21T15:44:36.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1108.152028] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T15:44:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1108.154538] Stack: 2026-03-21T15:44:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1108.155375] Call Trace: 2026-03-21T15:44:36.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1108.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:44:37.482 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~194s 2026-03-21T15:44:43.186 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~200s 2026-03-21T15:44:48.890 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~205s 2026-03-21T15:44:54.593 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~211s 2026-03-21T15:45:00.297 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~217s 2026-03-21T15:45:04.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1136.152031] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T15:45:04.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1136.154450] Stack: 2026-03-21T15:45:04.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1136.155226] Call Trace: 2026-03-21T15:45:04.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1136.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:45:06.001 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~222s 2026-03-21T15:45:11.706 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~228s 2026-03-21T15:45:17.408 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~234s 2026-03-21T15:45:23.111 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~240s 2026-03-21T15:45:28.815 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~245s 2026-03-21T15:45:32.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1164.152038] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T15:45:32.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1164.154570] Stack: 2026-03-21T15:45:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1164.155397] Call Trace: 2026-03-21T15:45:32.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1164.156034] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:45:34.518 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~251s 2026-03-21T15:45:40.221 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~257s 2026-03-21T15:45:45.925 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~262s 2026-03-21T15:45:51.629 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~268s 2026-03-21T15:45:57.334 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~274s 2026-03-21T15:46:00.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1192.152027] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T15:46:00.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1192.154497] Stack: 2026-03-21T15:46:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1192.155286] Call Trace: 2026-03-21T15:46:00.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1192.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:46:03.038 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~279s 2026-03-21T15:46:08.741 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~285s 2026-03-21T15:46:09.149 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.740119] INFO: task fsstress:3708 blocked for more than 120 seconds. 
2026-03-21T15:46:09.152 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.743205] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.155 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.746456] INFO: task fsstress:3709 blocked for more than 120 seconds.
2026-03-21T15:46:09.159 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.749443] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.162 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.753443] INFO: task fsstress:3710 blocked for more than 120 seconds.
2026-03-21T15:46:09.165 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.756372] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.168 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.759650] INFO: task fsstress:3711 blocked for more than 120 seconds.
2026-03-21T15:46:09.171 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.762648] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.174 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.766216] INFO: task fsstress:3712 blocked for more than 120 seconds.
2026-03-21T15:46:09.180 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.769233] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.199 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.774792] INFO: task fsstress:3713 blocked for more than 120 seconds.
2026-03-21T15:46:09.202 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.793903] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.204 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.796778] INFO: task fsstress:3714 blocked for more than 120 seconds.
2026-03-21T15:46:09.208 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.799033] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.210 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.802374] INFO: task fsstress:3715 blocked for more than 120 seconds.
2026-03-21T15:46:09.213 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.804657] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.215 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.807537] INFO: task fsstress:3716 blocked for more than 120 seconds.
2026-03-21T15:46:09.218 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.809868] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:09.220 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.812710] INFO: task fsstress:3717 blocked for more than 120 seconds.
2026-03-21T15:46:09.223 INFO:tasks.qemu.client.0.vm05.stdout:[ 1200.814951] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2026-03-21T15:46:14.445 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~291s
2026-03-21T15:46:20.149 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~297s
2026-03-21T15:46:25.852 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.osd.3 is failed for ~302s
2026-03-21T15:46:25.852 INFO:tasks.daemonwatchdog.daemon_watchdog:BARK! unmounting mounts and killing all daemons
2026-03-21T15:46:26.555 INFO:tasks.ceph.osd.0:Sent signal 15
2026-03-21T15:46:26.555 INFO:tasks.ceph.osd.1:Sent signal 15
2026-03-21T15:46:26.555 INFO:tasks.ceph.osd.2:Sent signal 15
2026-03-21T15:46:26.556 INFO:tasks.ceph.mon.a:Sent signal 15
2026-03-21T15:46:26.556 INFO:tasks.ceph.mon.b:Sent signal 15
2026-03-21T15:46:26.556 INFO:tasks.ceph.mgr.x:Sent signal 15
2026-03-21T15:46:26.556 INFO:tasks.ceph.mgr.y:Sent signal 15
2026-03-21T15:46:26.557 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:46:26.555+0000 7f4e03c4f640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-mon -f --cluster ceph -i a (PID: 52822) UID: 0
2026-03-21T15:46:26.557 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T15:46:26.555+0000 7f4e03c4f640 -1 mon.a@0(leader) e1 *** Got Signal Terminated ***
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:46:26.555+0000 7f7542365640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 (PID: 53113) UID: 0
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:46:26.555+0000 7f7542365640 -1 osd.0 38 *** Got signal Terminated ***
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T15:46:26.555+0000 7f7542365640 -1 osd.0 38 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:46:26.555+0000 7f91f5ba9640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 (PID: 53114) UID: 0
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:46:26.555+0000 7f91f5ba9640 -1 osd.1 38 *** Got signal Terminated ***
2026-03-21T15:46:26.557 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T15:46:26.555+0000 7f91f5ba9640 -1 osd.1 38 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T15:46:26.575
INFO:tasks.ceph.mon.b.vm05.stderr:2026-03-21T15:46:26.573+0000 7f7b730e5640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-mon -f --cluster ceph -i b (PID: 52582) UID: 0 2026-03-21T15:46:26.575 INFO:tasks.ceph.mon.b.vm05.stderr:2026-03-21T15:46:26.573+0000 7f7b730e5640 -1 mon.b@1(peon) e1 *** Got Signal Terminated *** 2026-03-21T15:46:26.575 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:46:26.573+0000 7fc72879e640 -1 received signal: Terminated from /usr/bin/python3 /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2 (PID: 52943) UID: 0 2026-03-21T15:46:26.575 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:46:26.573+0000 7fc72879e640 -1 osd.2 38 *** Got signal Terminated *** 2026-03-21T15:46:26.575 INFO:tasks.ceph.osd.2.vm05.stderr:2026-03-21T15:46:26.573+0000 7fc72879e640 -1 osd.2 38 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-21T15:46:26.766 INFO:tasks.ceph.mgr.y.vm05.stderr:daemon-helper: command crashed with signal 15 2026-03-21T15:46:28.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1220.152039] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T15:46:28.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1220.154981] Stack: 2026-03-21T15:46:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1220.155892] Call Trace: 2026-03-21T15:46:28.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 1220.156035] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:46:56.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1248.152039] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T15:46:56.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1248.154476] Stack: 2026-03-21T15:46:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1248.155266] Call Trace: 2026-03-21T15:46:56.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 1248.156034] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:47:10.285 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] INFO: rcu_sched detected stall on CPU 1 (t=60046 jiffies) 2026-03-21T15:47:10.286 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] Stack: 2026-03-21T15:47:10.286 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] Call Trace: 2026-03-21T15:47:10.287 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] 2026-03-21T15:47:10.288 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] 2026-03-21T15:47:10.298 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.876025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T15:47:10.319 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.880379] Stack: 2026-03-21T15:47:10.321 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.880379] Call Trace: 2026-03-21T15:47:10.331 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.880379] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:47:10.333 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.925165] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T15:47:10.336 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896195] Stack: 2026-03-21T15:47:10.336 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896195] Call Trace: 2026-03-21T15:47:10.342 
INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896195] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:47:10.342 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896077] Stack: 2026-03-21T15:47:10.343 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896077] Call Trace: 2026-03-21T15:47:10.350 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.896077] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:47:10.351 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.929161] Stack: 2026-03-21T15:47:10.352 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.929161] Call Trace: 2026-03-21T15:47:10.353 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.929161] 2026-03-21T15:47:10.354 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.929161] 2026-03-21T15:47:10.365 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.929161] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T15:47:10.365 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.959253] Stack: 2026-03-21T15:47:10.366 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960023] Call Trace: 2026-03-21T15:47:10.377 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:47:10.385 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960048] Stack: 2026-03-21T15:47:10.389 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960048] Call Trace: 2026-03-21T15:47:10.399 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960048] Code: 
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T15:47:10.400 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960099] Stack: 2026-03-21T15:47:10.401 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960099] Call Trace: 2026-03-21T15:47:10.415 INFO:tasks.qemu.client.0.vm05.stdout:[ 1261.960099] Code: 65 ff 04 25 b8 c4 00 00 75 09 65 48 8b 24 25 c0 c4 00 00 56 e8 b2 38 9d ff e9 4b 74 ff ff 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 <68> 10 ff ff ff 48 83 ec 58 fc 48 89 7c 24 50 48 89 74 24 48 48 2026-03-21T15:47:36.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1288.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T15:47:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1288.154398] Stack: 2026-03-21T15:47:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1288.155151] Call Trace: 2026-03-21T15:47:36.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1288.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:48:04.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1316.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T15:48:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1316.154731] Stack: 2026-03-21T15:48:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1316.155565] Call Trace: 2026-03-21T15:48:04.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1316.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:48:32.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1344.152026] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T15:48:32.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1344.154451] Stack: 2026-03-21T15:48:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1344.155228] Call Trace: 2026-03-21T15:48:32.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1344.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:49:00.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1372.152025] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T15:49:00.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1372.154434] Stack: 2026-03-21T15:49:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1372.155200] Call Trace: 2026-03-21T15:49:00.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1372.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:49:28.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1400.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T15:49:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1400.155181] Stack: 2026-03-21T15:49:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1400.156023] Call Trace: 2026-03-21T15:49:28.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 1400.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T15:49:56.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1428.152023] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33]
2026-03-21T15:49:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1428.155183] Stack:
2026-03-21T15:49:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1428.156021] Call Trace:
2026-03-21T15:49:56.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 1428.156021] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:50:10.457 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048024] INFO: rcu_sched detected stall on CPU 1 (t=105089 jiffies)
2026-03-21T15:50:10.458 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048028] Stack:
2026-03-21T15:50:10.459 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048028] Call Trace:
2026-03-21T15:50:10.460 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048028]
2026-03-21T15:50:10.461 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048028]
2026-03-21T15:50:10.474 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.048028] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:50:10.475 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.052039] Stack:
2026-03-21T15:50:10.477 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.052039] Call Trace:
2026-03-21T15:50:10.489 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.052039] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:50:10.492 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.083662] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T15:50:10.493 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072040] Stack:
2026-03-21T15:50:10.494 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072040] Call Trace:
2026-03-21T15:50:10.503 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072040] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:50:10.504 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072025] Stack:
2026-03-21T15:50:10.506 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072025] Call Trace:
2026-03-21T15:50:10.519 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.072025] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:50:10.520 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.087661] Stack:
2026-03-21T15:50:10.521 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.087661] Call Trace:
2026-03-21T15:50:10.522 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.087661]
2026-03-21T15:50:10.523 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.087661]
2026-03-21T15:50:10.536 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.087661] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:50:10.537 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.130612] Stack:
2026-03-21T15:50:10.538 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.131372] Call Trace:
2026-03-21T15:50:10.548 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:50:10.549 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132045] Stack:
2026-03-21T15:50:10.550 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132045] Call Trace:
2026-03-21T15:50:10.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132045] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:50:10.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132029] Stack:
2026-03-21T15:50:10.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132029] Call Trace:
2026-03-21T15:50:10.574 INFO:tasks.qemu.client.0.vm05.stdout:[ 1442.132029] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:50:36.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1468.152025] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:50:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1468.154401] Stack:
2026-03-21T15:50:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1468.155153] Call Trace:
2026-03-21T15:50:36.572 INFO:tasks.qemu.client.0.vm05.stdout:[ 1468.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:51:04.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1496.152038] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:51:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1496.155168] Stack:
2026-03-21T15:51:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1496.155937] Call Trace:
2026-03-21T15:51:04.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1496.156034] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:51:32.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1524.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:51:32.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1524.154569] Stack:
2026-03-21T15:51:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1524.155401] Call Trace:
2026-03-21T15:51:32.573 INFO:tasks.qemu.client.0.vm05.stdout:[ 1524.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:52:00.560 INFO:tasks.qemu.client.0.vm05.stdout:[ 1552.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:52:00.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1552.154522] Stack:
2026-03-21T15:52:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1552.155310] Call Trace:
2026-03-21T15:52:00.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 1552.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:52:28.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1580.152031] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:52:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1580.155703] Stack:
2026-03-21T15:52:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1580.156028] Call Trace:
2026-03-21T15:52:28.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 1580.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:52:56.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1608.152029] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:52:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1608.155266] Stack:
2026-03-21T15:52:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1608.156025] Call Trace:
2026-03-21T15:52:56.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 1608.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:53:10.613 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025] INFO: rcu_sched detected stall on CPU 1 (t=150128 jiffies)
2026-03-21T15:53:10.614 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025] Stack:
2026-03-21T15:53:10.616 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025] Call Trace:
2026-03-21T15:53:10.617 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025]
2026-03-21T15:53:10.618 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025]
2026-03-21T15:53:10.633 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.204025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:53:10.635 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.208056] Stack:
2026-03-21T15:53:10.636 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.208056] Call Trace:
2026-03-21T15:53:10.649 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.208056] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:53:10.662 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.243236] INFO: rcu_sched detected stalls on CPUs/tasks: { 1} (detected by 0, t=150138 jiffies)
2026-03-21T15:53:10.664 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228099] Stack:
2026-03-21T15:53:10.665 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228099] Call Trace:
2026-03-21T15:53:10.682 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228099] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:53:10.683 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228056] Stack:
2026-03-21T15:53:10.684 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228056] Call Trace:
2026-03-21T15:53:10.696 INFO:tasks.qemu.client.0.vm05.stdout:[ 1622.228056] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:53:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1648.152039] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:53:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1648.156035] Stack:
2026-03-21T15:53:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 1648.156035] Call Trace:
2026-03-21T15:53:36.580 INFO:tasks.qemu.client.0.vm05.stdout:[ 1648.156035] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:54:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1676.152027] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:54:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1676.155833] Stack:
2026-03-21T15:54:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1676.156024] Call Trace:
2026-03-21T15:54:04.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 1676.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:54:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1704.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:54:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1704.155858] Stack:
2026-03-21T15:54:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 1704.156023] Call Trace:
2026-03-21T15:54:32.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 1704.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:55:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1732.152028] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:55:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1732.155596] Stack:
2026-03-21T15:55:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1732.156024] Call Trace:
2026-03-21T15:55:00.580 INFO:tasks.qemu.client.0.vm05.stdout:[ 1732.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:55:28.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1760.152030] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:55:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1760.155460] Stack:
2026-03-21T15:55:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1760.156026] Call Trace:
2026-03-21T15:55:28.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 1760.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:55:56.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1788.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:55:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1788.155410] Stack:
2026-03-21T15:55:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1788.156022] Call Trace:
2026-03-21T15:55:56.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 1788.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:56:10.774 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024] INFO: rcu_sched detected stall on CPU 1 (t=195168 jiffies)
2026-03-21T15:56:10.775 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024] Stack:
2026-03-21T15:56:10.776 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024] Call Trace:
2026-03-21T15:56:10.777 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024]
2026-03-21T15:56:10.778 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024]
2026-03-21T15:56:10.792 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.364024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:56:10.793 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.368057] Stack:
2026-03-21T15:56:10.795 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.368057] Call Trace:
2026-03-21T15:56:10.809 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.368057] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:56:10.812 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.402933] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T15:56:10.813 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388040] Stack:
2026-03-21T15:56:10.814 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388040] Call Trace:
2026-03-21T15:56:10.824 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388040] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:56:10.825 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388083] Stack:
2026-03-21T15:56:10.827 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388083] Call Trace:
2026-03-21T15:56:10.837 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.388083] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:56:10.838 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.406930] Stack:
2026-03-21T15:56:10.840 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.406930] Call Trace:
2026-03-21T15:56:10.841 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.406930]
2026-03-21T15:56:10.843 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.406930]
2026-03-21T15:56:10.857 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.406930] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:56:10.858 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.451208] Stack:
2026-03-21T15:56:10.860 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452031] Call Trace:
2026-03-21T15:56:10.875 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452031] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:56:10.886 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452082] Stack:
2026-03-21T15:56:10.888 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452082] Call Trace:
2026-03-21T15:56:10.904 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452082] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:56:10.905 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452036] Stack:
2026-03-21T15:56:10.907 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452036] Call Trace:
2026-03-21T15:56:10.922 INFO:tasks.qemu.client.0.vm05.stdout:[ 1802.452036] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:56:36.561 INFO:tasks.qemu.client.0.vm05.stdout:[ 1828.152025] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:56:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1828.155325] Stack:
2026-03-21T15:56:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1828.156022] Call Trace:
2026-03-21T15:56:36.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 1828.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:57:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1856.152045] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:57:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 1856.156038] Stack:
2026-03-21T15:57:04.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 1856.156038] Call Trace:
2026-03-21T15:57:04.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 1856.156038] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:57:32.569 INFO:tasks.qemu.client.0.vm05.stdout:[ 1884.152031] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:57:32.569 INFO:tasks.qemu.client.0.vm05.stdout:[ 1884.155703] Stack:
2026-03-21T15:57:32.570 INFO:tasks.qemu.client.0.vm05.stdout:[ 1884.156024] Call Trace:
2026-03-21T15:57:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 1884.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:58:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1912.152031] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:58:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1912.155555] Stack:
2026-03-21T15:58:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1912.156024] Call Trace:
2026-03-21T15:58:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 1912.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:58:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1940.152053] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:58:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1940.156020] Stack:
2026-03-21T15:58:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 1940.156046] Call Trace:
2026-03-21T15:58:28.580 INFO:tasks.qemu.client.0.vm05.stdout:[ 1940.156046] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:58:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 1968.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T15:58:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 1968.155369] Stack:
2026-03-21T15:58:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 1968.156026] Call Trace:
2026-03-21T15:58:56.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 1968.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:59:10.930 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031] INFO: rcu_sched detected stall on CPU 1 (t=240208 jiffies)
2026-03-21T15:59:10.931 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031] Stack:
2026-03-21T15:59:10.933 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031] Call Trace:
2026-03-21T15:59:10.934 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031]
2026-03-21T15:59:10.935 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031]
2026-03-21T15:59:10.949 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.520031] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:59:10.951 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.524035] Stack:
2026-03-21T15:59:10.953 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.524035] Call Trace:
2026-03-21T15:59:10.967 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.524035] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:59:10.971 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.561153] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T15:59:10.972 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544050] Stack:
2026-03-21T15:59:10.973 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544050] Call Trace:
2026-03-21T15:59:10.983 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:59:10.984 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544036] Stack:
2026-03-21T15:59:10.985 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544036] Call Trace:
2026-03-21T15:59:10.995 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.544036] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:59:10.996 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.565149] Stack:
2026-03-21T15:59:10.997 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.565149] Call Trace:
2026-03-21T15:59:10.999 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.565149]
2026-03-21T15:59:11.000 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.565149]
2026-03-21T15:59:11.014 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.565149] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T15:59:11.016 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.608093] Stack:
2026-03-21T15:59:11.017 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.609235] Call Trace:
2026-03-21T15:59:11.032 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.610576] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T15:59:11.034 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612028] Stack:
2026-03-21T15:59:11.035 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612028] Call Trace:
2026-03-21T15:59:11.051 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612028] Code: 41 8d b6 cf 00 00 00 49 8d 7d 18 ff 90 d0 00 00 00 49 83 bc 24 98 90 e0 81 00 0f 84 74 ff ff ff 66 0f 1f 84 00 00 00 00 00 f3 90 <49> 83 7d 18 00 75 f7 e9 5d ff ff ff 66 90 55 48 89 e5 66 66 66
2026-03-21T15:59:11.052 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612045] Stack:
2026-03-21T15:59:11.054 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612045] Call Trace:
2026-03-21T15:59:11.069 INFO:tasks.qemu.client.0.vm05.stdout:[ 1982.612045] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T15:59:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2008.152033] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T15:59:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2008.156028] Stack:
2026-03-21T15:59:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 2008.156028] Call Trace:
2026-03-21T15:59:36.580 INFO:tasks.qemu.client.0.vm05.stdout:[ 2008.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:00:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2036.152036] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:00:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2036.155295] Stack:
2026-03-21T16:00:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2036.156033] Call Trace:
2026-03-21T16:00:04.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2036.156033] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:00:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2064.152039] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:00:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2064.155417] Stack:
2026-03-21T16:00:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2064.156029] Call Trace:
2026-03-21T16:00:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2064.156029] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:01:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2092.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:01:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2092.155459] Stack:
2026-03-21T16:01:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2092.156024] Call Trace:
2026-03-21T16:01:00.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2092.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:01:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2120.152029] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:01:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2120.155396] Stack:
2026-03-21T16:01:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2120.156024] Call Trace:
2026-03-21T16:01:28.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2120.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:01:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2148.152032] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:01:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2148.155436] Stack:
2026-03-21T16:01:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2148.156028] Call Trace:
2026-03-21T16:01:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2148.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:02:11.094 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025] INFO: rcu_sched detected stall on CPU 1 (t=285248 jiffies)
2026-03-21T16:02:11.095 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025] Stack:
2026-03-21T16:02:11.097 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025] Call Trace:
2026-03-21T16:02:11.098 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025]
2026-03-21T16:02:11.099 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025]
2026-03-21T16:02:11.112 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.684025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:02:11.113 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.688062] Stack:
2026-03-21T16:02:11.114 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.688062] Call Trace:
2026-03-21T16:02:11.126 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.688062] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:02:11.129 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.719923] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:02:11.130 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708050] Stack:
2026-03-21T16:02:11.131 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708050] Call Trace:
2026-03-21T16:02:11.140 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:02:11.141 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708043] Stack:
2026-03-21T16:02:11.142 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708043] Call Trace:
2026-03-21T16:02:11.151 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.708043] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:02:11.152 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.723922] Stack:
2026-03-21T16:02:11.153 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.723922] Call Trace:
2026-03-21T16:02:11.155 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.723922]
2026-03-21T16:02:11.156 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.723922]
2026-03-21T16:02:11.168 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.723922] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:02:11.169 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.761720] Stack:
2026-03-21T16:02:11.171 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.762842] Call Trace:
2026-03-21T16:02:11.184 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:02:11.185 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764050] Stack:
2026-03-21T16:02:11.187 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764050] Call Trace:
2026-03-21T16:02:11.201 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:02:11.203 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764032] Stack:
2026-03-21T16:02:11.204 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764032] Call Trace:
2026-03-21T16:02:11.220 INFO:tasks.qemu.client.0.vm05.stdout:[ 2162.764032] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:02:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2188.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:02:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2188.155324] Stack:
2026-03-21T16:02:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2188.156026] Call Trace:
2026-03-21T16:02:36.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2188.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:03:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2216.152027] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:03:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2216.155355] Stack:
2026-03-21T16:03:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2216.156022] Call Trace:
2026-03-21T16:03:04.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2216.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:03:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2244.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:03:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2244.155474] Stack:
2026-03-21T16:03:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2244.156023] Call Trace:
2026-03-21T16:03:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2244.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:04:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2272.152046] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:04:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2272.155592] Stack:
2026-03-21T16:04:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2272.156040] Call Trace:
2026-03-21T16:04:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2272.156040] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:04:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2300.152031] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:04:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2300.155803] Stack:
2026-03-21T16:04:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2300.156026] Call Trace:
2026-03-21T16:04:28.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2300.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:04:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2328.152033] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:04:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2328.155780] Stack:
2026-03-21T16:04:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2328.156025] Call Trace:
2026-03-21T16:04:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2328.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:05:11.251 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024] INFO: rcu_sched detected stall on CPU 1 (t=330287 jiffies)
2026-03-21T16:05:11.253 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024] Stack:
2026-03-21T16:05:11.254 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024] Call Trace:
2026-03-21T16:05:11.255 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024]
2026-03-21T16:05:11.256 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024]
2026-03-21T16:05:11.271 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.840024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:05:11.290 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.844207] Stack:
2026-03-21T16:05:11.290 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.844207] Call Trace:
2026-03-21T16:05:11.309 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.844207] Code: 65 ff 04 25 b8 c4 00 00 75 09 65 48 8b 24 25 c0 c4 00 00 56 e8 b2 38 9d ff e9 4b 74 ff ff 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 <68> 10 ff ff ff 48 83 ec 58 fc 48 89 7c 24 50 48 89 74 24 48 48
2026-03-21T16:05:11.313 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.901984] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:05:11.314 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864061] Stack:
2026-03-21T16:05:11.316 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864061] Call Trace:
2026-03-21T16:05:11.327 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864061] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:05:11.328 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864115] Stack:
2026-03-21T16:05:11.329 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864115] Call Trace:
2026-03-21T16:05:11.340 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.864115] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:05:11.341 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.905978] Stack:
2026-03-21T16:05:11.342 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.905978] Call Trace:
2026-03-21T16:05:11.347 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.905978]
2026-03-21T16:05:11.347 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.905978]
2026-03-21T16:05:11.362 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.905978] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:05:11.363 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.955591] Stack:
2026-03-21T16:05:11.365 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956024] Call Trace:
2026-03-21T16:05:11.377 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:05:11.389 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956058] Stack:
2026-03-21T16:05:11.390 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956058] Call Trace:
2026-03-21T16:05:11.409 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956058] Code:
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:05:11.411 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956103] Stack: 2026-03-21T16:05:11.412 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956103] Call Trace: 2026-03-21T16:05:11.428 INFO:tasks.qemu.client.0.vm05.stdout:[ 2342.956103] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:05:36.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2368.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:05:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2368.155456] Stack: 2026-03-21T16:05:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2368.156025] Call Trace: 2026-03-21T16:05:36.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2368.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:06:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2396.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:06:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2396.155311] Stack: 2026-03-21T16:06:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2396.156025] Call Trace: 2026-03-21T16:06:04.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2396.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:06:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2424.152026] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:06:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2424.155235] Stack: 2026-03-21T16:06:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2424.156023] Call Trace: 2026-03-21T16:06:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2424.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:07:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2452.152029] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:07:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2452.155343] Stack: 2026-03-21T16:07:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2452.156025] Call Trace: 2026-03-21T16:07:00.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2452.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:07:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2480.152032] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:07:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2480.155331] Stack: 2026-03-21T16:07:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2480.156028] Call Trace: 2026-03-21T16:07:28.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2480.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:07:56.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2508.152028] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:07:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2508.155346] Stack: 2026-03-21T16:07:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2508.156025] Call Trace: 2026-03-21T16:07:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2508.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:08:11.434 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024028] INFO: rcu_sched detected stall on CPU 1 (t=375333 jiffies) 2026-03-21T16:08:11.436 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.027462] Stack: 2026-03-21T16:08:11.437 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.028041] Call Trace: 2026-03-21T16:08:11.453 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.028041] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:08:11.457 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046471] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T16:08:11.458 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024035] Stack: 2026-03-21T16:08:11.460 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024035] Call Trace: 2026-03-21T16:08:11.461 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024035] 2026-03-21T16:08:11.463 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024035] 2026-03-21T16:08:11.475 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.024035] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:08:11.476 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046715] Stack: 2026-03-21T16:08:11.477 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046719] Call Trace: 2026-03-21T16:08:11.489 
INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046724] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:08:11.491 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046787] Stack: 2026-03-21T16:08:11.492 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046791] Call Trace: 2026-03-21T16:08:11.503 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.046795] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:08:11.505 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.050469] Stack: 2026-03-21T16:08:11.506 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.050469] Call Trace: 2026-03-21T16:08:11.508 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.050469] 2026-03-21T16:08:11.509 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.050469] 2026-03-21T16:08:11.524 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.050469] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:08:11.525 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.116952] Stack: 2026-03-21T16:08:11.526 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.117976] Call Trace: 2026-03-21T16:08:11.539 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.119267] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:08:11.540 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120061] Stack: 2026-03-21T16:08:11.542 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120061] Call Trace: 2026-03-21T16:08:11.556 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120061] Code: 
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:08:11.557 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120034] Stack: 2026-03-21T16:08:11.559 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120034] Call Trace: 2026-03-21T16:08:11.571 INFO:tasks.qemu.client.0.vm05.stdout:[ 2523.120034] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:08:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2548.152035] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:08:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2548.155612] Stack: 2026-03-21T16:08:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2548.156028] Call Trace: 2026-03-21T16:08:36.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2548.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:09:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2576.152040] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:09:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2576.155931] Stack: 2026-03-21T16:09:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2576.156036] Call Trace: 2026-03-21T16:09:04.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2576.156036] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:09:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2604.152041] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T16:09:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2604.155481] Stack: 2026-03-21T16:09:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2604.156035] Call Trace: 2026-03-21T16:09:32.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2604.156035] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:10:00.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2632.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:10:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2632.155384] Stack: 2026-03-21T16:10:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2632.156025] Call Trace: 2026-03-21T16:10:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2632.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:10:28.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2660.152031] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:10:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2660.155360] Stack: 2026-03-21T16:10:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2660.156027] Call Trace: 2026-03-21T16:10:28.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2660.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:10:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2688.152033] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:10:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2688.155522] Stack: 2026-03-21T16:10:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2688.156027] Call Trace: 2026-03-21T16:10:56.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2688.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:11:11.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] INFO: rcu_sched detected stall on CPU 1 (t=420369 jiffies) 2026-03-21T16:11:11.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.172054] Stack: 2026-03-21T16:11:11.583 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.172054] Call Trace: 2026-03-21T16:11:11.599 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.172054] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:11:11.605 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192241] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T16:11:11.606 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] Stack: 2026-03-21T16:11:11.608 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] Call Trace: 2026-03-21T16:11:11.609 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] 2026-03-21T16:11:11.610 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] 2026-03-21T16:11:11.624 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.168028] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:11:11.626 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192545] Stack: 2026-03-21T16:11:11.628 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192548] Call Trace: 2026-03-21T16:11:11.641 
INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192557] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:11:11.643 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192625] Stack: 2026-03-21T16:11:11.644 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192630] Call Trace: 2026-03-21T16:11:11.658 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.192636] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:11:11.659 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.196235] Stack: 2026-03-21T16:11:11.661 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.196235] Call Trace: 2026-03-21T16:11:11.662 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.196235] 2026-03-21T16:11:11.664 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.196235] 2026-03-21T16:11:11.681 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.196235] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:11:11.683 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.274256] Stack: 2026-03-21T16:11:11.684 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.275466] Call Trace: 2026-03-21T16:11:11.699 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276031] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:11:11.701 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Stack: 2026-03-21T16:11:11.702 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Call Trace: 2026-03-21T16:11:11.720 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Code: 
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:11:11.722 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Stack: 2026-03-21T16:11:11.723 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Call Trace: 2026-03-21T16:11:11.739 INFO:tasks.qemu.client.0.vm05.stdout:[ 2703.276047] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:11:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2728.152029] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:11:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2728.155374] Stack: 2026-03-21T16:11:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2728.156025] Call Trace: 2026-03-21T16:11:36.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2728.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:12:04.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2756.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:12:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2756.155295] Stack: 2026-03-21T16:12:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2756.156024] Call Trace: 2026-03-21T16:12:04.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2756.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:12:32.562 INFO:tasks.qemu.client.0.vm05.stdout:[ 2784.152025] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:12:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2784.155186] Stack: 2026-03-21T16:12:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2784.156022] Call Trace: 2026-03-21T16:12:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2784.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:13:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2812.152028] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:13:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2812.155302] Stack: 2026-03-21T16:13:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2812.156024] Call Trace: 2026-03-21T16:13:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2812.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:13:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2840.152027] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:13:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2840.155245] Stack: 2026-03-21T16:13:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2840.156023] Call Trace: 2026-03-21T16:13:28.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2840.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:13:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2868.152024] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T16:13:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2868.155195] Stack: 2026-03-21T16:13:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2868.156021] Call Trace: 2026-03-21T16:13:56.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2868.156021] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:14:11.727 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] INFO: rcu_sched detected stall on CPU 1 (t=465406 jiffies) 2026-03-21T16:14:11.728 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] Stack: 2026-03-21T16:14:11.729 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] Call Trace: 2026-03-21T16:14:11.730 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] 2026-03-21T16:14:11.731 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] 2026-03-21T16:14:11.744 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.316025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:14:11.745 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.320091] Stack: 2026-03-21T16:14:11.747 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.320091] Call Trace: 2026-03-21T16:14:11.763 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.320091] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:14:11.767 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.356092] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T16:14:11.768 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340037] Stack: 2026-03-21T16:14:11.770 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340037] Call Trace: 2026-03-21T16:14:11.781 
INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340037] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:14:11.782 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340053] Stack: 2026-03-21T16:14:11.784 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340053] Call Trace: 2026-03-21T16:14:11.795 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.340053] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:14:11.797 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.360088] Stack: 2026-03-21T16:14:11.798 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.360088] Call Trace: 2026-03-21T16:14:11.800 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.360088] 2026-03-21T16:14:11.801 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.360088] 2026-03-21T16:14:11.817 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.360088] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:14:11.818 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.409626] Stack: 2026-03-21T16:14:11.819 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.410718] Call Trace: 2026-03-21T16:14:11.832 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.411999] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:14:11.834 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412025] Stack: 2026-03-21T16:14:11.835 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412025] Call Trace: 2026-03-21T16:14:11.852 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412025] Code: 
41 8d b6 cf 00 00 00 49 8d 7d 18 ff 90 d0 00 00 00 49 83 bc 24 98 90 e0 81 00 0f 84 74 ff ff ff 66 0f 1f 84 00 00 00 00 00 f3 90 <49> 83 7d 18 00 75 f7 e9 5d ff ff ff 66 90 55 48 89 e5 66 66 66 2026-03-21T16:14:11.853 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412055] Stack: 2026-03-21T16:14:11.854 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412055] Call Trace: 2026-03-21T16:14:11.867 INFO:tasks.qemu.client.0.vm05.stdout:[ 2883.412055] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:14:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2908.152027] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:14:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2908.155202] Stack: 2026-03-21T16:14:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2908.156024] Call Trace: 2026-03-21T16:14:36.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 2908.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:15:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2936.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:15:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2936.155300] Stack: 2026-03-21T16:15:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2936.156025] Call Trace: 2026-03-21T16:15:04.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 2936.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:15:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2964.152033] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33]
2026-03-21T16:15:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2964.155684] Stack:
2026-03-21T16:15:32.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 2964.156028] Call Trace:
2026-03-21T16:15:32.580 INFO:tasks.qemu.client.0.vm05.stdout:[ 2964.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:16:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 2992.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:16:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 2992.155494] Stack:
2026-03-21T16:16:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 2992.156026] Call Trace:
2026-03-21T16:16:00.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 2992.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:16:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3020.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:16:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3020.155483] Stack:
2026-03-21T16:16:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3020.156027] Call Trace:
2026-03-21T16:16:28.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3020.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:16:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3048.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:16:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3048.155435] Stack:
2026-03-21T16:16:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3048.156026] Call Trace:
2026-03-21T16:16:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3048.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:17:11.891 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027] INFO: rcu_sched detected stall on CPU 1 (t=510447 jiffies)
2026-03-21T16:17:11.893 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.483518] Stack:
2026-03-21T16:17:11.894 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.484054] Call Trace:
2026-03-21T16:17:11.910 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.484054] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:17:11.913 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.501961] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:17:11.914 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027] Stack:
2026-03-21T16:17:11.915 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027] Call Trace:
2026-03-21T16:17:11.917 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027]
2026-03-21T16:17:11.918 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027]
2026-03-21T16:17:11.929 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.480027] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:17:11.930 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502219] Stack:
2026-03-21T16:17:11.932 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502222] Call Trace:
2026-03-21T16:17:11.942 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502227] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:17:11.943 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502328] Stack:
2026-03-21T16:17:11.945 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502333] Call Trace:
2026-03-21T16:17:11.957 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.502341] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:17:11.958 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.505959] Stack:
2026-03-21T16:17:11.960 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.505959] Call Trace:
2026-03-21T16:17:11.961 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.505959]
2026-03-21T16:17:11.967 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.505959]
2026-03-21T16:17:11.982 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.505959] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:17:11.983 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.574532] Stack:
2026-03-21T16:17:11.984 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.575625] Call Trace:
2026-03-21T16:17:11.998 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:17:11.999 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576040] Stack:
2026-03-21T16:17:12.001 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576040] Call Trace:
2026-03-21T16:17:12.018 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576040] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:17:12.019 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576050] Stack:
2026-03-21T16:17:12.020 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576050] Call Trace:
2026-03-21T16:17:12.034 INFO:tasks.qemu.client.0.vm05.stdout:[ 3063.576050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:17:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3088.152030] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:17:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3088.155477] Stack:
2026-03-21T16:17:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3088.156026] Call Trace:
2026-03-21T16:17:36.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3088.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:18:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3116.152031] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:18:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3116.155566] Stack:
2026-03-21T16:18:04.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3116.156027] Call Trace:
2026-03-21T16:18:04.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3116.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:18:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3144.152034] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:18:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3144.155437] Stack:
2026-03-21T16:18:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3144.156030] Call Trace:
2026-03-21T16:18:32.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3144.156030] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:19:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3172.152029] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:19:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3172.155381] Stack:
2026-03-21T16:19:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3172.156025] Call Trace:
2026-03-21T16:19:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3172.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:19:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3200.152036] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:19:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3200.155744] Stack:
2026-03-21T16:19:28.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3200.156031] Call Trace:
2026-03-21T16:19:28.583 INFO:tasks.qemu.client.0.vm05.stdout:[ 3200.156031] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:19:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3228.152040] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:19:56.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 3228.156033] Stack:
2026-03-21T16:19:56.568 INFO:tasks.qemu.client.0.vm05.stdout:[ 3228.156033] Call Trace:
2026-03-21T16:19:56.582 INFO:tasks.qemu.client.0.vm05.stdout:[ 3228.156033] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:20:12.036 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025] INFO: rcu_sched detected stall on CPU 1 (t=555483 jiffies)
2026-03-21T16:20:12.037 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.627834] Stack:
2026-03-21T16:20:12.038 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.628039] Call Trace:
2026-03-21T16:20:12.053 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.628039] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:20:12.056 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645185] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:20:12.057 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025] Stack:
2026-03-21T16:20:12.058 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025] Call Trace:
2026-03-21T16:20:12.060 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025]
2026-03-21T16:20:12.061 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025]
2026-03-21T16:20:12.071 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.624025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:20:12.072 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645493] Stack:
2026-03-21T16:20:12.074 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645496] Call Trace:
2026-03-21T16:20:12.085 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645501] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:20:12.086 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645570] Stack:
2026-03-21T16:20:12.088 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645572] Call Trace:
2026-03-21T16:20:12.099 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.645576] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:20:12.101 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.649184] Stack:
2026-03-21T16:20:12.102 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.649184] Call Trace:
2026-03-21T16:20:12.104 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.649184]
2026-03-21T16:20:12.105 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.649184]
2026-03-21T16:20:12.120 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.649184] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:20:12.121 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.712497] Stack:
2026-03-21T16:20:12.123 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.713556] Call Trace:
2026-03-21T16:20:12.136 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.714877] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:20:12.137 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716028] Stack:
2026-03-21T16:20:12.139 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716028] Call Trace:
2026-03-21T16:20:12.154 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716028] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:20:12.155 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716048] Stack:
2026-03-21T16:20:12.156 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716048] Call Trace:
2026-03-21T16:20:12.169 INFO:tasks.qemu.client.0.vm05.stdout:[ 3243.716048] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:20:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3268.152046] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:20:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3268.155365] Stack:
2026-03-21T16:20:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3268.156042] Call Trace:
2026-03-21T16:20:36.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3268.156042] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:21:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3296.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:21:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3296.155387] Stack:
2026-03-21T16:21:04.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3296.156027] Call Trace:
2026-03-21T16:21:04.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3296.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:21:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3324.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:21:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3324.155318] Stack:
2026-03-21T16:21:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3324.156024] Call Trace:
2026-03-21T16:21:32.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3324.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:22:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3352.152037] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:22:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3352.155438] Stack:
2026-03-21T16:22:00.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3352.156034] Call Trace:
2026-03-21T16:22:00.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3352.156034] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:22:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3380.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:22:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3380.155330] Stack:
2026-03-21T16:22:28.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3380.156023] Call Trace:
2026-03-21T16:22:28.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3380.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:22:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3408.152032] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:22:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3408.155203] Stack:
2026-03-21T16:22:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3408.156029] Call Trace:
2026-03-21T16:22:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3408.156029] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:23:12.179 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021] INFO: rcu_sched detected stall on CPU 1 (t=600519 jiffies)
2026-03-21T16:23:12.181 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.771443] Stack:
2026-03-21T16:23:12.182 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.772051] Call Trace:
2026-03-21T16:23:12.198 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.772051] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:23:12.202 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790271] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:23:12.203 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021] Stack:
2026-03-21T16:23:12.205 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021] Call Trace:
2026-03-21T16:23:12.206 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021]
2026-03-21T16:23:12.207 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021]
2026-03-21T16:23:12.218 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.768021] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:23:12.220 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790524] Stack:
2026-03-21T16:23:12.221 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790526] Call Trace:
2026-03-21T16:23:12.233 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790533] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:23:12.234 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790584] Stack:
2026-03-21T16:23:12.235 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790587] Call Trace:
2026-03-21T16:23:12.251 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.790592] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:23:12.252 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.794269] Stack:
2026-03-21T16:23:12.254 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.794269] Call Trace:
2026-03-21T16:23:12.260 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.794269]
2026-03-21T16:23:12.260 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.794269]
2026-03-21T16:23:12.271 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.794269] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:23:12.272 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.862757] Stack:
2026-03-21T16:23:12.273 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.863803] Call Trace:
2026-03-21T16:23:12.286 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864021] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:23:12.287 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864051] Stack:
2026-03-21T16:23:12.289 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864051] Call Trace:
2026-03-21T16:23:12.305 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864051] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:23:12.306 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864050] Stack:
2026-03-21T16:23:12.307 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864050] Call Trace:
2026-03-21T16:23:12.320 INFO:tasks.qemu.client.0.vm05.stdout:[ 3423.864050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:23:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3448.152025] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:23:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3448.155414] Stack:
2026-03-21T16:23:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3448.156021] Call Trace:
2026-03-21T16:23:36.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3448.156021] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:24:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3476.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:24:04.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 3476.155186] Stack:
2026-03-21T16:24:04.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 3476.156023] Call Trace:
2026-03-21T16:24:04.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 3476.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:24:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3504.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:24:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3504.155214] Stack:
2026-03-21T16:24:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3504.156023] Call Trace:
2026-03-21T16:24:32.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3504.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:25:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3532.152029] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:25:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3532.155207] Stack:
2026-03-21T16:25:00.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3532.156025] Call Trace:
2026-03-21T16:25:00.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3532.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:25:28.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3560.152027] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:25:28.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3560.155228] Stack:
2026-03-21T16:25:28.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3560.156024] Call Trace:
2026-03-21T16:25:28.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3560.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:25:56.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3588.152036] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:25:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3588.155210] Stack:
2026-03-21T16:25:56.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3588.156033] Call Trace:
2026-03-21T16:25:56.578 INFO:tasks.qemu.client.0.vm05.stdout:[ 3588.156033] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:26:12.323 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025] INFO: rcu_sched detected stall on CPU 1 (t=645555 jiffies)
2026-03-21T16:26:12.324 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025] Stack:
2026-03-21T16:26:12.326 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025] Call Trace:
2026-03-21T16:26:12.327 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025]
2026-03-21T16:26:12.328 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025]
2026-03-21T16:26:12.340 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.912025] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:26:12.342 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.916040] Stack:
2026-03-21T16:26:12.343 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.916040] Call Trace:
2026-03-21T16:26:12.359 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.916040] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:26:12.364 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.950617] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:26:12.365 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936054] Stack:
2026-03-21T16:26:12.366 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936054] Call Trace:
2026-03-21T16:26:12.377 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936054] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:26:12.379 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936050] Stack:
2026-03-21T16:26:12.380 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936050] Call Trace:
2026-03-21T16:26:12.392 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.936050] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:26:12.393 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.954613] Stack:
2026-03-21T16:26:12.395 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.954613] Call Trace:
2026-03-21T16:26:12.396 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.954613]
2026-03-21T16:26:12.397 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.954613]
2026-03-21T16:26:12.413 INFO:tasks.qemu.client.0.vm05.stdout:[ 3603.954613] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:26:12.414 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.004440] Stack:
2026-03-21T16:26:12.415 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.005464] Call Trace:
2026-03-21T16:26:12.428 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.006709] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:26:12.429 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008048] Stack:
2026-03-21T16:26:12.430 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008048] Call Trace:
2026-03-21T16:26:12.446 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008048] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:26:12.447 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008042] Stack:
2026-03-21T16:26:12.448 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008042] Call Trace:
2026-03-21T16:26:12.460 INFO:tasks.qemu.client.0.vm05.stdout:[ 3604.008042] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:26:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3628.152025] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:26:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3628.154418] Stack:
2026-03-21T16:26:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3628.155186] Call Trace:
2026-03-21T16:26:36.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 3628.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:27:04.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3656.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:27:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3656.154363] Stack:
2026-03-21T16:27:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3656.155116] Call Trace:
2026-03-21T16:27:04.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 3656.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:27:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3684.152030] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:27:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3684.154354] Stack:
2026-03-21T16:27:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3684.155098] Call Trace:
2026-03-21T16:27:32.574 INFO:tasks.qemu.client.0.vm05.stdout:[ 3684.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:28:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 3712.152028] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:28:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3712.154416] Stack:
2026-03-21T16:28:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3712.155193] Call Trace:
2026-03-21T16:28:00.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 3712.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:28:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3740.152028] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:28:28.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3740.155333] Stack:
2026-03-21T16:28:28.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3740.156024] Call Trace:
2026-03-21T16:28:28.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3740.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:28:56.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3768.152027] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:28:56.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3768.155179] Stack:
2026-03-21T16:28:56.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3768.156024] Call Trace:
2026-03-21T16:28:56.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3768.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:29:12.484 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024] INFO: rcu_sched detected stall on CPU 1 (t=690596 jiffies)
2026-03-21T16:29:12.485 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024] Stack:
2026-03-21T16:29:12.486 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024] Call Trace:
2026-03-21T16:29:12.487 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024]
2026-03-21T16:29:12.488 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024]
2026-03-21T16:29:12.501 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.072024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:29:12.503 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.076031] Stack:
2026-03-21T16:29:12.505 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.076031] Call Trace:
2026-03-21T16:29:12.520 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.076031] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:29:12.523 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.111405] INFO: rcu_sched detected stalls on CPUs/tasks: {
2026-03-21T16:29:12.525 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096045] Stack:
2026-03-21T16:29:12.526 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096045] Call Trace:
2026-03-21T16:29:12.539 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096045] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:29:12.540 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096048] Stack:
2026-03-21T16:29:12.541 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096048] Call Trace:
2026-03-21T16:29:12.553 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.096048] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:29:12.555 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.115402] Stack:
2026-03-21T16:29:12.556 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.115402] Call Trace:
2026-03-21T16:29:12.558 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.115402]
2026-03-21T16:29:12.559 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.115402]
2026-03-21T16:29:12.575 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.115402] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7
2026-03-21T16:29:12.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.166326] Stack:
2026-03-21T16:29:12.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.167353] Call Trace:
2026-03-21T16:29:12.591 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:29:12.593 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168044] Stack:
2026-03-21T16:29:12.594 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168044] Call Trace:
2026-03-21T16:29:12.607 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168044] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:29:12.609 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168058] Stack:
2026-03-21T16:29:12.610 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168058] Call Trace:
2026-03-21T16:29:12.626 INFO:tasks.qemu.client.0.vm05.stdout:[ 3784.168058] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00
2026-03-21T16:29:40.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3812.152024] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33]
2026-03-21T16:29:40.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3812.155295] Stack:
2026-03-21T16:29:40.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3812.156021] Call Trace:
2026-03-21T16:29:40.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3812.156021] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:30:08.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3840.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:30:08.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3840.155291] Stack:
2026-03-21T16:30:08.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3840.156023] Call Trace:
2026-03-21T16:30:08.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3840.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:30:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3868.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:30:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3868.155289] Stack:
2026-03-21T16:30:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3868.156025] Call Trace:
2026-03-21T16:30:36.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3868.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:31:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3896.152027] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:31:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3896.155257] Stack:
2026-03-21T16:31:04.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3896.156024] Call Trace:
2026-03-21T16:31:04.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3896.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:31:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3924.152032] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33]
2026-03-21T16:31:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3924.155492] Stack:
2026-03-21T16:31:32.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3924.156029] Call Trace:
2026-03-21T16:31:32.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3924.156029] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00
2026-03-21T16:32:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3952.152025] BUG: soft lockup - CPU#1 stuck for 22s!
[kworker/1:1:33] 2026-03-21T16:32:00.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3952.155260] Stack: 2026-03-21T16:32:00.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3952.156022] Call Trace: 2026-03-21T16:32:00.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3952.156022] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:32:12.644 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] INFO: rcu_sched detected stall on CPU 1 (t=735635 jiffies) 2026-03-21T16:32:12.645 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] Stack: 2026-03-21T16:32:12.646 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] Call Trace: 2026-03-21T16:32:12.648 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] 2026-03-21T16:32:12.649 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] 2026-03-21T16:32:12.662 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.232024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:32:12.663 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.236061] Stack: 2026-03-21T16:32:12.665 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.236061] Call Trace: 2026-03-21T16:32:12.679 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.236061] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:32:12.681 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.269245] INFO: rcu_sched detected stalls on CPUs/tasks: { 1} (detected by 0, t=735645 jiffies) 2026-03-21T16:32:12.682 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256047] Stack: 2026-03-21T16:32:12.683 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256047] Call Trace: 
2026-03-21T16:32:12.695 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256047] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:32:12.695 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256041] Stack: 2026-03-21T16:32:12.696 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256041] Call Trace: 2026-03-21T16:32:12.707 INFO:tasks.qemu.client.0.vm05.stdout:[ 3964.256041] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:32:40.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 3992.152030] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:32:40.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 3992.155341] Stack: 2026-03-21T16:32:40.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 3992.156026] Call Trace: 2026-03-21T16:32:40.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 3992.156026] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:33:08.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4020.152040] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:33:08.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4020.156035] Stack: 2026-03-21T16:33:08.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4020.156035] Call Trace: 2026-03-21T16:33:08.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 4020.156035] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:33:36.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 4048.152027] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T16:33:36.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4048.154376] Stack: 2026-03-21T16:33:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4048.155287] Call Trace: 2026-03-21T16:33:36.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 4048.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:34:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4076.152027] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:34:04.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4076.154306] Stack: 2026-03-21T16:34:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4076.154990] Call Trace: 2026-03-21T16:34:04.576 INFO:tasks.qemu.client.0.vm05.stdout:[ 4076.155941] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:34:32.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 4104.152026] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:34:32.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4104.154642] Stack: 2026-03-21T16:34:32.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4104.155587] Call Trace: 2026-03-21T16:34:32.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 4104.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:35:00.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 4132.152028] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:35:00.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4132.154443] Stack: 2026-03-21T16:35:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4132.155324] Call Trace: 2026-03-21T16:35:00.577 INFO:tasks.qemu.client.0.vm05.stdout:[ 4132.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:35:12.803 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] INFO: rcu_sched detected stall on CPU 1 (t=780675 jiffies) 2026-03-21T16:35:12.804 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] Stack: 2026-03-21T16:35:12.805 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] Call Trace: 2026-03-21T16:35:12.806 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] 2026-03-21T16:35:12.807 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] 2026-03-21T16:35:12.818 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.392024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:35:12.836 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.396164] Stack: 2026-03-21T16:35:12.837 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.396164] Call Trace: 2026-03-21T16:35:12.860 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.396164] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:35:12.863 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.451278] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T16:35:12.864 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412046] Stack: 2026-03-21T16:35:12.865 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412046] Call Trace: 2026-03-21T16:35:12.873 
INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412046] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:35:12.875 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412042] Stack: 2026-03-21T16:35:12.875 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412042] Call Trace: 2026-03-21T16:35:12.884 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.412042] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:35:12.885 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.455277] Stack: 2026-03-21T16:35:12.886 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.455277] Call Trace: 2026-03-21T16:35:12.887 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.455277] 2026-03-21T16:35:12.888 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.455277] 2026-03-21T16:35:12.900 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.455277] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:35:12.901 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.491002] Stack: 2026-03-21T16:35:12.903 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492033] Call Trace: 2026-03-21T16:35:12.916 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492033] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:35:12.918 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492052] Stack: 2026-03-21T16:35:12.919 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492052] Call Trace: 2026-03-21T16:35:12.934 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492052] Code: 
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:35:12.935 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492080] Stack: 2026-03-21T16:35:12.936 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492080] Call Trace: 2026-03-21T16:35:12.946 INFO:tasks.qemu.client.0.vm05.stdout:[ 4144.492080] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:35:40.563 INFO:tasks.qemu.client.0.vm05.stdout:[ 4172.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:35:40.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4172.154322] Stack: 2026-03-21T16:35:40.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4172.155276] Call Trace: 2026-03-21T16:35:40.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 4172.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:36:08.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4200.152026] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:36:08.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4200.155690] Stack: 2026-03-21T16:36:08.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4200.156023] Call Trace: 2026-03-21T16:36:08.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4200.156023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:36:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4228.152045] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:36:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4228.155713] Stack: 2026-03-21T16:36:36.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4228.156028] Call Trace: 2026-03-21T16:36:36.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4228.156028] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:37:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4256.152023] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:37:04.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4256.155852] Stack: 2026-03-21T16:37:04.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4256.156020] Call Trace: 2026-03-21T16:37:04.582 INFO:tasks.qemu.client.0.vm05.stdout:[ 4256.156020] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:37:32.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4284.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:37:32.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4284.155838] Stack: 2026-03-21T16:37:32.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4284.156025] Call Trace: 2026-03-21T16:37:32.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4284.156025] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:38:00.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4312.152027] BUG: soft lockup - CPU#1 stuck for 23s! 
[kworker/1:1:33] 2026-03-21T16:38:00.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4312.155685] Stack: 2026-03-21T16:38:00.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4312.156024] Call Trace: 2026-03-21T16:38:00.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4312.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:38:12.984 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] INFO: rcu_sched detected stall on CPU 1 (t=825720 jiffies) 2026-03-21T16:38:12.985 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] Stack: 2026-03-21T16:38:12.986 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] Call Trace: 2026-03-21T16:38:12.987 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] 2026-03-21T16:38:12.989 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] 2026-03-21T16:38:13.001 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.572024] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:38:13.016 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.576063] Stack: 2026-03-21T16:38:13.018 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.576063] Call Trace: 2026-03-21T16:38:13.038 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.576063] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:38:13.041 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.629401] INFO: rcu_sched detected stalls on CPUs/tasks: { 2026-03-21T16:38:13.042 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592051] Stack: 2026-03-21T16:38:13.044 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592051] Call Trace: 2026-03-21T16:38:13.053 
INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592051] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:38:13.054 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592074] Stack: 2026-03-21T16:38:13.056 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592074] Call Trace: 2026-03-21T16:38:13.065 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.592074] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:38:13.066 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.633400] Stack: 2026-03-21T16:38:13.067 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.633400] Call Trace: 2026-03-21T16:38:13.068 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.633400] 2026-03-21T16:38:13.070 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.633400] 2026-03-21T16:38:13.084 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.633400] Code: c0 89 c7 48 89 d0 44 89 06 48 c1 e0 20 89 f9 48 09 c8 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 55 89 f0 89 f9 48 89 e5 0f 30 <31> c0 5d c3 66 90 55 48 89 e5 66 66 66 66 90 89 f9 0f 33 89 c7 2026-03-21T16:38:13.085 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.674777] Stack: 2026-03-21T16:38:13.086 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.675764] Call Trace: 2026-03-21T16:38:13.098 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676023] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:38:13.100 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676046] Stack: 2026-03-21T16:38:13.101 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676046] Call Trace: 2026-03-21T16:38:13.115 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676046] Code: 
55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:38:13.116 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676039] Stack: 2026-03-21T16:38:13.117 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676039] Call Trace: 2026-03-21T16:38:13.129 INFO:tasks.qemu.client.0.vm05.stdout:[ 4324.676039] Code: 55 48 89 e5 66 66 66 66 90 fa 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb 5d c3 0f 1f 40 00 55 48 89 e5 66 66 66 66 90 fb f4 <5d> c3 0f 1f 00 55 48 89 e5 66 66 66 66 90 f4 5d c3 0f 1f 40 00 2026-03-21T16:38:40.564 INFO:tasks.qemu.client.0.vm05.stdout:[ 4352.152027] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:38:40.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4352.155140] Stack: 2026-03-21T16:38:40.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4352.156024] Call Trace: 2026-03-21T16:38:40.579 INFO:tasks.qemu.client.0.vm05.stdout:[ 4352.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:39:08.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4380.152030] BUG: soft lockup - CPU#1 stuck for 23s! [kworker/1:1:33] 2026-03-21T16:39:08.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4380.155477] Stack: 2026-03-21T16:39:08.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4380.156027] Call Trace: 2026-03-21T16:39:08.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4380.156027] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:39:36.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4408.152023] BUG: soft lockup - CPU#1 stuck for 22s! 
[kworker/1:1:33] 2026-03-21T16:39:36.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4408.155464] Stack: 2026-03-21T16:39:36.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4408.156020] Call Trace: 2026-03-21T16:39:36.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4408.156020] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:40:04.565 INFO:tasks.qemu.client.0.vm05.stdout:[ 4436.152028] BUG: soft lockup - CPU#1 stuck for 22s! [kworker/1:1:33] 2026-03-21T16:40:04.566 INFO:tasks.qemu.client.0.vm05.stdout:[ 4436.155683] Stack: 2026-03-21T16:40:04.567 INFO:tasks.qemu.client.0.vm05.stdout:[ 4436.156024] Call Trace: 2026-03-21T16:40:04.581 INFO:tasks.qemu.client.0.vm05.stdout:[ 4436.156024] Code: dd fe ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 55 b8 00 01 00 00 48 89 e5 f0 66 0f c1 07 0f b6 d4 38 c2 74 0c 0f 1f 00 f3 90 <0f> b6 07 38 d0 75 f7 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 2026-03-21T16:40:20.501 DEBUG:teuthology.exit:Got signal 15; running 1 handler... 2026-03-21T16:40:20.502 DEBUG:teuthology.exit:Finished running handlers