2026-03-21T12:28:42.860 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-21T12:28:42.864 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-21T12:28:42.881 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3480
branch: tentacle
description: rbd/cli/{base/install clusters/{fixed-1} conf/{disable-pool-app} data-pool/replicated features/layering msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery}
email: null
first_in_suite: false
flavor: default
job_id: '3480'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        rbd default data pool: datapool
        rbd default features: 1
      global:
        mon client directed command retry: 5
        mon warn on pool no app: false
        ms inject socket failures: 5000
      mgr:
        debug mgr: 20
        debug ms: 1
        debug rbd: 20
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        bluestore block size: 96636764160
        bluestore compression algorithm: zlib
        bluestore compression mode: aggressive
        bluestore fsck on mount: true
        debug bluefs: 1/20
        debug bluestore: 1/20
        debug ms: 1
        debug osd: 20
        debug rocksdb: 4/10
        mon osd backfillfull_ratio: 0.85
        mon osd full ratio: 0.9
        mon osd nearfull ratio: 0.8
        osd failsafe full ratio: 0.95
        osd mclock iops capacity threshold hdd: 49000
        osd objectstore: bluestore
        osd shutdown pgref assert: true
    flavor: default
    fs: xfs
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(OSD_SLOW_PING_TIME
    sha1: 70f8415b300f041766fa27faf7d5472699e32388
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      global:
        osd crush chooseleaf type: 0
        osd pool default pg num: 128
        osd pool default pgp num: 128
        osd pool default size: 2
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 70f8415b300f041766fa27faf7d5472699e32388
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  thrashosds:
    bdev_inject_crash: 2
    bdev_inject_crash_probability: 0.5
  workunit:
    branch: tt-tentacle
    sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - osd.2
  - client.0
seed: 3051
sha1: 70f8415b300f041766fa27faf7d5472699e32388
sleep_before_teardown: 0
subset: 1/128
suite: rbd
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 0392f78529848ec72469e8e431875cb98d3a5fb4
targets:
  vm01.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvxfiM7gfvyd1cIcNHBELq4poOWY0FLMokV9lmIm6/pi3mdF5tLyx+tYT25K2pNt4FTYP4Xd6Rm0kN0xByPURU=
tasks:
- install: null
- ceph: null
- exec:
    client.0:
    - sudo ceph osd pool create datapool 4
    - rbd pool init datapool
- install:
    extra_system_packages:
    - fio
- workunit:
    clients:
      client.0:
      - rbd/rbd_support_module_recovery.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-20_22:04:26
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345
2026-03-21T12:28:42.881 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa; will attempt to use it
2026-03-21T12:28:42.881 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa/tasks
2026-03-21T12:28:42.881 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-21T12:28:42.882 INFO:teuthology.task.internal:Checking packages...
2026-03-21T12:28:42.882 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash '70f8415b300f041766fa27faf7d5472699e32388'
2026-03-21T12:28:42.882 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-21T12:28:42.882 INFO:teuthology.packaging:ref: None
2026-03-21T12:28:42.882 INFO:teuthology.packaging:tag: None
2026-03-21T12:28:42.882 INFO:teuthology.packaging:branch: tentacle
2026-03-21T12:28:42.882 INFO:teuthology.packaging:sha1: 70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T12:28:42.882 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=tentacle
2026-03-21T12:28:43.710 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-714-g147f7c6a-1jammy
2026-03-21T12:28:43.711 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-21T12:28:43.712 INFO:teuthology.task.internal:no buildpackages task found
2026-03-21T12:28:43.712 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-21T12:28:43.712 INFO:teuthology.task.internal:Saving configuration
2026-03-21T12:28:43.717 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-21T12:28:43.717 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-21T12:28:43.723 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm01.local', 'description': '/archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3480', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-21 12:28:06.207507', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:01', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvxfiM7gfvyd1cIcNHBELq4poOWY0FLMokV9lmIm6/pi3mdF5tLyx+tYT25K2pNt4FTYP4Xd6Rm0kN0xByPURU='}
2026-03-21T12:28:43.723 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-21T12:28:43.724 INFO:teuthology.task.internal:roles: ubuntu@vm01.local - ['mon.a', 'mgr.x', 'osd.0', 'osd.1', 'osd.2', 'client.0']
2026-03-21T12:28:43.724 INFO:teuthology.run_tasks:Running task console_log...
2026-03-21T12:28:43.729 DEBUG:teuthology.task.console_log:vm01 does not support IPMI; excluding
2026-03-21T12:28:43.729 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f180f804940>, signals=[15])
2026-03-21T12:28:43.729 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-21T12:28:43.730 INFO:teuthology.task.internal:Opening connections...
2026-03-21T12:28:43.730 DEBUG:teuthology.task.internal:connecting to ubuntu@vm01.local
2026-03-21T12:28:43.730 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T12:28:43.790 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-21T12:28:43.791 DEBUG:teuthology.orchestra.run.vm01:> uname -m
2026-03-21T12:28:43.924 INFO:teuthology.orchestra.run.vm01.stdout:x86_64
2026-03-21T12:28:43.925 DEBUG:teuthology.orchestra.run.vm01:> cat /etc/os-release
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:NAME="Ubuntu"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_ID="22.04"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:VERSION_CODENAME=jammy
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:ID=ubuntu
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:ID_LIKE=debian
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-21T12:28:43.968 INFO:teuthology.orchestra.run.vm01.stdout:UBUNTU_CODENAME=jammy
2026-03-21T12:28:43.969 INFO:teuthology.lock.ops:Updating vm01.local on lock server
2026-03-21T12:28:43.973 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-21T12:28:43.974 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-21T12:28:43.975 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-21T12:28:43.975 DEBUG:teuthology.orchestra.run.vm01:> test '!' -e /home/ubuntu/cephtest
2026-03-21T12:28:44.012 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-21T12:28:44.013 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-21T12:28:44.013 DEBUG:teuthology.orchestra.run.vm01:> test -z $(ls -A /var/lib/ceph)
2026-03-21T12:28:44.061 INFO:teuthology.orchestra.run.vm01.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-21T12:28:44.061 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-21T12:28:44.068 DEBUG:teuthology.orchestra.run.vm01:> test -e /ceph-qa-ready
2026-03-21T12:28:44.108 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T12:28:44.341 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-21T12:28:44.343 INFO:teuthology.task.internal:Creating test directory...
2026-03-21T12:28:44.343 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-21T12:28:44.346 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-21T12:28:44.347 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-21T12:28:44.348 INFO:teuthology.task.internal:Creating archive directory...
2026-03-21T12:28:44.348 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-21T12:28:44.394 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-21T12:28:44.395 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-21T12:28:44.395 DEBUG:teuthology.orchestra.run.vm01:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-21T12:28:44.436 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T12:28:44.436 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-21T12:28:44.486 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T12:28:44.490 INFO:teuthology.orchestra.run.vm01.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-21T12:28:44.491 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-21T12:28:44.492 INFO:teuthology.task.internal:Configuring sudo...
2026-03-21T12:28:44.493 DEBUG:teuthology.orchestra.run.vm01:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-21T12:28:44.540 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-21T12:28:44.542 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-21T12:28:44.542 DEBUG:teuthology.orchestra.run.vm01:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-21T12:28:44.585 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-21T12:28:44.629 DEBUG:teuthology.orchestra.run.vm01:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-21T12:28:44.673 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:28:44.673 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-21T12:28:44.721 DEBUG:teuthology.orchestra.run.vm01:> sudo service rsyslog restart
2026-03-21T12:28:44.776 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-21T12:28:44.778 INFO:teuthology.task.internal:Starting timer...
2026-03-21T12:28:44.778 INFO:teuthology.run_tasks:Running task pcp...
2026-03-21T12:28:44.781 INFO:teuthology.run_tasks:Running task selinux...
2026-03-21T12:28:44.783 INFO:teuthology.task.selinux:Excluding vm01: VMs are not yet supported
2026-03-21T12:28:44.783 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-21T12:28:44.783 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-21T12:28:44.783 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-21T12:28:44.783 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-21T12:28:44.784 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-21T12:28:44.784 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/kshtsk/ceph-cm-ansible.git
2026-03-21T12:28:44.785 INFO:teuthology.repo_utils:Fetching github.com_kshtsk_ceph-cm-ansible_main from origin
2026-03-21T12:28:45.269 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-21T12:28:45.274 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-21T12:28:45.274 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventorymk8m7ys4 --limit vm01.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-21T12:30:59.485 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm01.local')]
2026-03-21T12:30:59.486 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm01.local'
2026-03-21T12:30:59.486 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm01.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-21T12:30:59.545 DEBUG:teuthology.orchestra.run.vm01:> true
2026-03-21T12:30:59.781 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm01.local'
2026-03-21T12:30:59.781 INFO:teuthology.run_tasks:Running task clock...
2026-03-21T12:30:59.784 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-21T12:30:59.784 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-21T12:30:59.784 DEBUG:teuthology.orchestra.run.vm01:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Command line: ntpd -gq
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: ----------------------------------------------------
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: ntp-4 is maintained by Network Time Foundation,
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: corporation.  Support and training for ntp-4 are
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: available at https://www.nwtime.org/support
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: ----------------------------------------------------
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: proto: precision = 0.030 usec (-25)
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: basedate set to 2022-02-04
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: gps base set to 2022-02-06 (week 2196)
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stderr:21 Mar 12:30:59 ntpd[16251]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 84 days ago
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen and drop on 0 v6wildcard [::]:123
2026-03-21T12:30:59.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-21T12:30:59.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen normally on 2 lo 127.0.0.1:123
2026-03-21T12:30:59.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen normally on 3 ens3 192.168.123.101:123
2026-03-21T12:30:59.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen normally on 4 lo [::1]:123
2026-03-21T12:30:59.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:1%2]:123
2026-03-21T12:30:59.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:30:59 ntpd[16251]: Listening on routing socket on fd #22 for interface updates
2026-03-21T12:31:00.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:00 ntpd[16251]: Soliciting pool server 136.243.147.210
2026-03-21T12:31:01.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:01 ntpd[16251]: Soliciting pool server 116.203.218.109
2026-03-21T12:31:01.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:01 ntpd[16251]: Soliciting pool server 80.153.195.191
2026-03-21T12:31:02.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:02 ntpd[16251]: Soliciting pool server 85.121.52.237
2026-03-21T12:31:02.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:02 ntpd[16251]: Soliciting pool server 93.177.65.20
2026-03-21T12:31:02.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:02 ntpd[16251]: Soliciting pool server 162.159.200.123
2026-03-21T12:31:03.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:03 ntpd[16251]: Soliciting pool server 195.201.125.53
2026-03-21T12:31:03.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:03 ntpd[16251]: Soliciting pool server 167.235.70.245
2026-03-21T12:31:03.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:03 ntpd[16251]: Soliciting pool server 213.239.234.28
2026-03-21T12:31:03.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:03 ntpd[16251]: Soliciting pool server 158.101.188.125
2026-03-21T12:31:04.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:04 ntpd[16251]: Soliciting pool server 5.9.19.62
2026-03-21T12:31:04.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:04 ntpd[16251]: Soliciting pool server 116.203.244.102
2026-03-21T12:31:04.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:04 ntpd[16251]: Soliciting pool server 141.144.241.16
2026-03-21T12:31:04.841 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:04 ntpd[16251]: Soliciting pool server 185.125.190.57
2026-03-21T12:31:05.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:05 ntpd[16251]: Soliciting pool server 185.125.190.58
2026-03-21T12:31:05.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:05 ntpd[16251]: Soliciting pool server 88.99.86.9
2026-03-21T12:31:05.840 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:05 ntpd[16251]: Soliciting pool server 77.90.0.148
2026-03-21T12:31:08.863 INFO:teuthology.orchestra.run.vm01.stdout:21 Mar 12:31:08 ntpd[16251]: ntpd: time slew -0.000349 s
2026-03-21T12:31:08.864 INFO:teuthology.orchestra.run.vm01.stdout:ntpd: time slew -0.000349s
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: remote refid st t when poll reach delay offset jitter
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout:==============================================================================
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-21T12:31:08.885 INFO:teuthology.orchestra.run.vm01.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-21T12:31:08.886 INFO:teuthology.run_tasks:Running task install...
2026-03-21T12:31:08.888 DEBUG:teuthology.task.install:project ceph
2026-03-21T12:31:08.888 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-21T12:31:08.888 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-21T12:31:08.888 INFO:teuthology.task.install:Using flavor: default
2026-03-21T12:31:08.890 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-21T12:31:08.890 INFO:teuthology.task.install:extra packages: []
2026-03-21T12:31:08.890 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-key list | grep Ceph
2026-03-21T12:31:08.972 INFO:teuthology.orchestra.run.vm01.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-21T12:31:08.994 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-21T12:31:08.994 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph.com (release key)
2026-03-21T12:31:08.994 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-21T12:31:08.994 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64
2026-03-21T12:31:08.994 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T12:31:09.624 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default/
2026-03-21T12:31:09.624 INFO:teuthology.task.install.deb:Package version is 20.2.0-712-g70f8415b-1jammy
2026-03-21T12:31:10.137 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:10.137 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/apt/sources.list.d/ceph.list
2026-03-21T12:31:10.146 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update
2026-03-21T12:31:10.272 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
2026-03-21T12:31:10.275 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease
2026-03-21T12:31:10.283 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-backports InRelease
2026-03-21T12:31:10.451 INFO:teuthology.orchestra.run.vm01.stdout:Hit:4 http://security.ubuntu.com/ubuntu jammy-security InRelease
2026-03-21T12:31:10.858 INFO:teuthology.orchestra.run.vm01.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy InRelease
2026-03-21T12:31:10.971 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy Release [7680 B]
2026-03-21T12:31:11.083 INFO:teuthology.orchestra.run.vm01.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy Release.gpg
2026-03-21T12:31:11.195 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.8 kB]
2026-03-21T12:31:11.277 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 26.5 kB in 1s (27.4 kB/s)
2026-03-21T12:31:12.005 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-21T12:31:12.018 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-712-g70f8415b-1jammy cephadm=20.2.0-712-g70f8415b-1jammy ceph-mds=20.2.0-712-g70f8415b-1jammy ceph-mgr=20.2.0-712-g70f8415b-1jammy ceph-common=20.2.0-712-g70f8415b-1jammy ceph-fuse=20.2.0-712-g70f8415b-1jammy ceph-test=20.2.0-712-g70f8415b-1jammy ceph-volume=20.2.0-712-g70f8415b-1jammy radosgw=20.2.0-712-g70f8415b-1jammy python3-rados=20.2.0-712-g70f8415b-1jammy python3-rgw=20.2.0-712-g70f8415b-1jammy python3-cephfs=20.2.0-712-g70f8415b-1jammy python3-rbd=20.2.0-712-g70f8415b-1jammy libcephfs2=20.2.0-712-g70f8415b-1jammy libcephfs-dev=20.2.0-712-g70f8415b-1jammy librados2=20.2.0-712-g70f8415b-1jammy librbd1=20.2.0-712-g70f8415b-1jammy rbd-fuse=20.2.0-712-g70f8415b-1jammy 2026-03-21T12:31:12.053 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-21T12:31:12.251 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-21T12:31:12.251 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 2026-03-21T12:31:12.435 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-21T12:31:12.435 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout:The following additional packages will be installed: 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-daemon libcephfs-proxy2 libdouble-conversion3 libfuse2 libjq1 2026-03-21T12:31:12.436 INFO:teuthology.orchestra.run.vm01.stdout: liblttng-ust1 libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 librgw2 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: libsqlite3-mod-ceph libthrift-0.16.0 nvme-cli python-asyncssh-doc 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-asyncssh python3-cachetools python3-ceph-argparse 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-iniconfig python3-jaraco.classes python3-jaraco.collections 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.functools python3-jaraco.text python3-joblib 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-kubernetes python3-natsort python3-pluggy python3-portend 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-pytest python3-repoze.lru python3-requests-oauthlib python3-routes 2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-rsa python3-simplejson python3-sklearn python3-sklearn-lib 
2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-tempora python3-threadpoolctl python3-toml python3-wcwidth
2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: python3-webob python3-websocket python3-zc.lockfile qttranslations5-l10n
2026-03-21T12:31:12.437 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat xmlstarlet
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout:Suggested packages:
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout: python3-influxdb liblua5.3-dev luarocks python-natsort-doc python-psutil-doc
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout: subversion python-pygments-doc ttf-bitstream-vera python3-paste python3-dap
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout: python-sklearn-doc ipython3 python-webob-doc gsmartcontrol smart-notifier
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout: mailx | mailutils
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout:Recommended packages:
2026-03-21T12:31:12.438 INFO:teuthology.orchestra.run.vm01.stdout: btrfs-tools
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed:
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: libcephfs-daemon libcephfs-dev libcephfs-proxy2 libcephfs2
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 libnbd0 liboath0
2026-03-21T12:31:12.483 INFO:teuthology.orchestra.run.vm01.stdout: libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: libradosstriper1 librdkafka1 librgw2 libsqlite3-mod-ceph libthrift-0.16.0
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: nvme-cli python-asyncssh-doc python3-asyncssh python3-cachetools
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-natsort
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-pluggy python3-portend python3-prettytable python3-psutil python3-py
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-pygments python3-pytest python3-rados python3-rbd python3-repoze.lru
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-requests-oauthlib python3-rgw python3-routes python3-rsa
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-simplejson python3-sklearn python3-sklearn-lib python3-tempora
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-threadpoolctl python3-toml python3-wcwidth python3-webob
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: python3-websocket python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse
2026-03-21T12:31:12.484 INFO:teuthology.orchestra.run.vm01.stdout: smartmontools socat xmlstarlet
2026-03-21T12:31:12.485 INFO:teuthology.orchestra.run.vm01.stdout:The following packages will be upgraded:
2026-03-21T12:31:12.485 INFO:teuthology.orchestra.run.vm01.stdout: librados2 librbd1
2026-03-21T12:31:12.571 INFO:teuthology.orchestra.run.vm01.stdout:2 upgraded, 85 newly installed, 0 to remove and 36 not upgraded.
2026-03-21T12:31:12.571 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 281 MB of archives.
2026-03-21T12:31:12.571 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 1092 MB of additional disk space will be used.
2026-03-21T12:31:12.571 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB]
2026-03-21T12:31:12.752 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB]
2026-03-21T12:31:12.757 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB]
2026-03-21T12:31:12.792 INFO:teuthology.orchestra.run.vm01.stdout:Get:4 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB]
2026-03-21T12:31:12.895 INFO:teuthology.orchestra.run.vm01.stdout:Get:5 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB]
2026-03-21T12:31:12.899 INFO:teuthology.orchestra.run.vm01.stdout:Get:6 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB]
2026-03-21T12:31:12.913 INFO:teuthology.orchestra.run.vm01.stdout:Get:7 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB]
2026-03-21T12:31:12.917 INFO:teuthology.orchestra.run.vm01.stdout:Get:8 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB]
2026-03-21T12:31:12.917 INFO:teuthology.orchestra.run.vm01.stdout:Get:9 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB]
2026-03-21T12:31:12.918 INFO:teuthology.orchestra.run.vm01.stdout:Get:10 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB]
2026-03-21T12:31:12.918 INFO:teuthology.orchestra.run.vm01.stdout:Get:11 http://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB]
2026-03-21T12:31:12.927 INFO:teuthology.orchestra.run.vm01.stdout:Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB]
2026-03-21T12:31:12.927 INFO:teuthology.orchestra.run.vm01.stdout:Get:13 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B]
2026-03-21T12:31:12.930 INFO:teuthology.orchestra.run.vm01.stdout:Get:14 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB]
2026-03-21T12:31:12.963 INFO:teuthology.orchestra.run.vm01.stdout:Get:15 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B]
2026-03-21T12:31:12.964 INFO:teuthology.orchestra.run.vm01.stdout:Get:16 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B]
2026-03-21T12:31:12.964 INFO:teuthology.orchestra.run.vm01.stdout:Get:17 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB]
2026-03-21T12:31:12.964 INFO:teuthology.orchestra.run.vm01.stdout:Get:18 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB]
2026-03-21T12:31:12.965 INFO:teuthology.orchestra.run.vm01.stdout:Get:19 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B]
2026-03-21T12:31:12.965 INFO:teuthology.orchestra.run.vm01.stdout:Get:20 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B]
2026-03-21T12:31:12.965 INFO:teuthology.orchestra.run.vm01.stdout:Get:21 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB]
2026-03-21T12:31:12.967 INFO:teuthology.orchestra.run.vm01.stdout:Get:22 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB]
2026-03-21T12:31:12.967 INFO:teuthology.orchestra.run.vm01.stdout:Get:23 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB]
2026-03-21T12:31:13.002 INFO:teuthology.orchestra.run.vm01.stdout:Get:24 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB]
2026-03-21T12:31:13.004 INFO:teuthology.orchestra.run.vm01.stdout:Get:25 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB]
2026-03-21T12:31:13.004 INFO:teuthology.orchestra.run.vm01.stdout:Get:26 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB]
2026-03-21T12:31:13.005 INFO:teuthology.orchestra.run.vm01.stdout:Get:27 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB]
2026-03-21T12:31:13.071 INFO:teuthology.orchestra.run.vm01.stdout:Get:28 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 20.2.0-712-g70f8415b-1jammy [2867 kB]
2026-03-21T12:31:13.071 INFO:teuthology.orchestra.run.vm01.stdout:Get:29 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB]
2026-03-21T12:31:13.073 INFO:teuthology.orchestra.run.vm01.stdout:Get:30 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB]
2026-03-21T12:31:13.074 INFO:teuthology.orchestra.run.vm01.stdout:Get:31 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB]
2026-03-21T12:31:13.082 INFO:teuthology.orchestra.run.vm01.stdout:Get:32 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B]
2026-03-21T12:31:13.082 INFO:teuthology.orchestra.run.vm01.stdout:Get:33 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB]
2026-03-21T12:31:13.083 INFO:teuthology.orchestra.run.vm01.stdout:Get:34 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB]
2026-03-21T12:31:13.083 INFO:teuthology.orchestra.run.vm01.stdout:Get:35 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB]
2026-03-21T12:31:13.083 INFO:teuthology.orchestra.run.vm01.stdout:Get:36 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB]
2026-03-21T12:31:13.084 INFO:teuthology.orchestra.run.vm01.stdout:Get:37 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB]
2026-03-21T12:31:13.110 INFO:teuthology.orchestra.run.vm01.stdout:Get:38 http://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB]
2026-03-21T12:31:13.111 INFO:teuthology.orchestra.run.vm01.stdout:Get:39 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB]
2026-03-21T12:31:13.145 INFO:teuthology.orchestra.run.vm01.stdout:Get:40 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB]
2026-03-21T12:31:13.146 INFO:teuthology.orchestra.run.vm01.stdout:Get:41 http://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB]
2026-03-21T12:31:13.149 INFO:teuthology.orchestra.run.vm01.stdout:Get:42 http://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB]
2026-03-21T12:31:13.151 INFO:teuthology.orchestra.run.vm01.stdout:Get:43 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB]
2026-03-21T12:31:13.155 INFO:teuthology.orchestra.run.vm01.stdout:Get:44 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB]
2026-03-21T12:31:13.158 INFO:teuthology.orchestra.run.vm01.stdout:Get:45 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B]
2026-03-21T12:31:13.158 INFO:teuthology.orchestra.run.vm01.stdout:Get:46 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB]
2026-03-21T12:31:13.158 INFO:teuthology.orchestra.run.vm01.stdout:Get:47 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB]
2026-03-21T12:31:13.207 INFO:teuthology.orchestra.run.vm01.stdout:Get:48 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB]
2026-03-21T12:31:13.208 INFO:teuthology.orchestra.run.vm01.stdout:Get:49 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB]
2026-03-21T12:31:13.216 INFO:teuthology.orchestra.run.vm01.stdout:Get:50 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB]
2026-03-21T12:31:13.218 INFO:teuthology.orchestra.run.vm01.stdout:Get:51 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB]
2026-03-21T12:31:13.219 INFO:teuthology.orchestra.run.vm01.stdout:Get:52 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB]
2026-03-21T12:31:13.220 INFO:teuthology.orchestra.run.vm01.stdout:Get:53 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB]
2026-03-21T12:31:13.220 INFO:teuthology.orchestra.run.vm01.stdout:Get:54 http://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB]
2026-03-21T12:31:13.256 INFO:teuthology.orchestra.run.vm01.stdout:Get:55 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB]
2026-03-21T12:31:13.870 INFO:teuthology.orchestra.run.vm01.stdout:Get:56 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 20.2.0-712-g70f8415b-1jammy [3583 kB]
2026-03-21T12:31:13.991 INFO:teuthology.orchestra.run.vm01.stdout:Get:57 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 20.2.0-712-g70f8415b-1jammy [829 kB]
2026-03-21T12:31:14.002 INFO:teuthology.orchestra.run.vm01.stdout:Get:58 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 20.2.0-712-g70f8415b-1jammy [364 kB]
2026-03-21T12:31:14.008 INFO:teuthology.orchestra.run.vm01.stdout:Get:59 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 20.2.0-712-g70f8415b-1jammy [32.8 kB]
2026-03-21T12:31:14.008 INFO:teuthology.orchestra.run.vm01.stdout:Get:60 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 20.2.0-712-g70f8415b-1jammy [184 kB]
2026-03-21T12:31:14.011 INFO:teuthology.orchestra.run.vm01.stdout:Get:61 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 20.2.0-712-g70f8415b-1jammy [83.8 kB]
2026-03-21T12:31:14.012 INFO:teuthology.orchestra.run.vm01.stdout:Get:62 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 20.2.0-712-g70f8415b-1jammy [341 kB]
2026-03-21T12:31:14.019 INFO:teuthology.orchestra.run.vm01.stdout:Get:63 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 20.2.0-712-g70f8415b-1jammy [8697 kB]
2026-03-21T12:31:14.356 INFO:teuthology.orchestra.run.vm01.stdout:Get:64 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 20.2.0-712-g70f8415b-1jammy [112 kB]
2026-03-21T12:31:14.356 INFO:teuthology.orchestra.run.vm01.stdout:Get:65 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 20.2.0-712-g70f8415b-1jammy [261 kB]
2026-03-21T12:31:14.360 INFO:teuthology.orchestra.run.vm01.stdout:Get:66 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 20.2.0-712-g70f8415b-1jammy [29.3 MB]
2026-03-21T12:31:15.465 INFO:teuthology.orchestra.run.vm01.stdout:Get:67 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 20.2.0-712-g70f8415b-1jammy [5415 kB]
2026-03-21T12:31:15.639 INFO:teuthology.orchestra.run.vm01.stdout:Get:68 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 20.2.0-712-g70f8415b-1jammy [246 kB]
2026-03-21T12:31:15.641 INFO:teuthology.orchestra.run.vm01.stdout:Get:69 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 20.2.0-712-g70f8415b-1jammy [124 kB]
2026-03-21T12:31:15.642 INFO:teuthology.orchestra.run.vm01.stdout:Get:70 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 20.2.0-712-g70f8415b-1jammy [906 kB]
2026-03-21T12:31:15.705 INFO:teuthology.orchestra.run.vm01.stdout:Get:71 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 20.2.0-712-g70f8415b-1jammy [6399 kB]
2026-03-21T12:31:15.963 INFO:teuthology.orchestra.run.vm01.stdout:Get:72 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 20.2.0-712-g70f8415b-1jammy [21.7 MB]
2026-03-21T12:31:17.034 INFO:teuthology.orchestra.run.vm01.stdout:Get:73 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 20.2.0-712-g70f8415b-1jammy [14.1 kB]
2026-03-21T12:31:17.034 INFO:teuthology.orchestra.run.vm01.stdout:Get:74 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 20.2.0-712-g70f8415b-1jammy [955 kB]
2026-03-21T12:31:17.067 INFO:teuthology.orchestra.run.vm01.stdout:Get:75 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 20.2.0-712-g70f8415b-1jammy [2341 kB]
2026-03-21T12:31:17.344 INFO:teuthology.orchestra.run.vm01.stdout:Get:76 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 20.2.0-712-g70f8415b-1jammy [1049 kB]
2026-03-21T12:31:17.347 INFO:teuthology.orchestra.run.vm01.stdout:Get:77 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 20.2.0-712-g70f8415b-1jammy [179 kB]
2026-03-21T12:31:17.348 INFO:teuthology.orchestra.run.vm01.stdout:Get:78 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 20.2.0-712-g70f8415b-1jammy [45.5 MB]
2026-03-21T12:31:19.539 INFO:teuthology.orchestra.run.vm01.stdout:Get:79 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 20.2.0-712-g70f8415b-1jammy [8625 kB]
2026-03-21T12:31:19.935 INFO:teuthology.orchestra.run.vm01.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 20.2.0-712-g70f8415b-1jammy [14.2 kB]
2026-03-21T12:31:19.935 INFO:teuthology.orchestra.run.vm01.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 20.2.0-712-g70f8415b-1jammy [99.5 MB]
2026-03-21T12:31:24.183 INFO:teuthology.orchestra.run.vm01.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 20.2.0-712-g70f8415b-1jammy [135 kB]
2026-03-21T12:31:24.183 INFO:teuthology.orchestra.run.vm01.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-daemon amd64 20.2.0-712-g70f8415b-1jammy [43.3 kB]
2026-03-21T12:31:24.183 INFO:teuthology.orchestra.run.vm01.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-proxy2 amd64 20.2.0-712-g70f8415b-1jammy [30.7 kB]
2026-03-21T12:31:24.193 INFO:teuthology.orchestra.run.vm01.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 20.2.0-712-g70f8415b-1jammy [41.5 kB]
2026-03-21T12:31:24.198 INFO:teuthology.orchestra.run.vm01.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 20.2.0-712-g70f8415b-1jammy [25.1 MB]
2026-03-21T12:31:25.223 INFO:teuthology.orchestra.run.vm01.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 20.2.0-712-g70f8415b-1jammy [97.9 kB]
2026-03-21T12:31:25.496 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 281 MB in 13s (22.0 MB/s)
2026-03-21T12:31:25.667 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liblttng-ust1:amd64.
2026-03-21T12:31:25.701 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 119262 files and directories currently installed.)
2026-03-21T12:31:25.703 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../00-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ...
2026-03-21T12:31:25.705 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-21T12:31:25.723 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libdouble-conversion3:amd64.
2026-03-21T12:31:25.729 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../01-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-21T12:31:25.730 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-21T12:31:25.745 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libpcre2-16-0:amd64.
2026-03-21T12:31:25.751 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../02-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ...
2026-03-21T12:31:25.752 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-21T12:31:25.771 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5core5a:amd64.
2026-03-21T12:31:25.777 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../03-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-21T12:31:25.781 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:25.826 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5dbus5:amd64.
2026-03-21T12:31:25.832 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../04-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-21T12:31:25.833 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:25.850 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libqt5network5:amd64.
2026-03-21T12:31:25.856 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../05-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ...
2026-03-21T12:31:25.857 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:25.881 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libthrift-0.16.0:amd64.
2026-03-21T12:31:25.886 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../06-libthrift-0.16.0_0.16.0-2_amd64.deb ...
2026-03-21T12:31:25.887 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-21T12:31:25.910 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../07-librbd1_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:25.912 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librbd1 (20.2.0-712-g70f8415b-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-21T12:31:25.982 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../08-librados2_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:25.984 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librados2 (20.2.0-712-g70f8415b-1jammy) over (17.2.9-0ubuntu0.22.04.2) ...
2026-03-21T12:31:26.046 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libnbd0.
2026-03-21T12:31:26.052 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../09-libnbd0_1.10.5-1_amd64.deb ...
2026-03-21T12:31:26.053 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libnbd0 (1.10.5-1) ...
2026-03-21T12:31:26.068 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs2.
2026-03-21T12:31:26.074 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../10-libcephfs2_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.074 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.097 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rados.
2026-03-21T12:31:26.103 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../11-python3-rados_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.103 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rados (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.122 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-argparse.
2026-03-21T12:31:26.128 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../12-python3-ceph-argparse_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:26.129 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-argparse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.145 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cephfs.
2026-03-21T12:31:26.149 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../13-python3-cephfs_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.150 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cephfs (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.167 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-ceph-common.
2026-03-21T12:31:26.172 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../14-python3-ceph-common_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:26.173 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-ceph-common (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.195 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-wcwidth.
2026-03-21T12:31:26.201 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../15-python3-wcwidth_0.2.5+dfsg1-1_all.deb ...
2026-03-21T12:31:26.201 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-21T12:31:26.219 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-prettytable.
2026-03-21T12:31:26.225 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../16-python3-prettytable_2.5.0-2_all.deb ...
2026-03-21T12:31:26.226 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-prettytable (2.5.0-2) ...
2026-03-21T12:31:26.242 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rbd.
2026-03-21T12:31:26.247 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../17-python3-rbd_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.248 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rbd (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.268 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librdkafka1:amd64.
2026-03-21T12:31:26.274 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../18-librdkafka1_1.8.0-1build1_amd64.deb ...
2026-03-21T12:31:26.275 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-21T12:31:26.297 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package librgw2.
2026-03-21T12:31:26.303 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../19-librgw2_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.304 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking librgw2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.453 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rgw.
2026-03-21T12:31:26.459 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../20-python3-rgw_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.459 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rgw (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.475 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package liboath0:amd64.
2026-03-21T12:31:26.481 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../21-liboath0_2.6.7-3ubuntu0.1_amd64.deb ...
2026-03-21T12:31:26.481 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-21T12:31:26.495 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libradosstriper1.
2026-03-21T12:31:26.501 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../22-libradosstriper1_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.502 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libradosstriper1 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.520 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-common.
2026-03-21T12:31:26.526 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../23-ceph-common_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.526 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-common (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:26.950 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-base.
2026-03-21T12:31:26.956 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../24-ceph-base_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:26.963 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-base (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.052 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.functools.
2026-03-21T12:31:27.058 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../25-python3-jaraco.functools_3.4.0-2_all.deb ...
2026-03-21T12:31:27.059 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ...
2026-03-21T12:31:27.073 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cheroot.
2026-03-21T12:31:27.080 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../26-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ...
2026-03-21T12:31:27.081 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-21T12:31:27.101 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.classes.
2026-03-21T12:31:27.107 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../27-python3-jaraco.classes_3.2.1-3_all.deb ...
2026-03-21T12:31:27.108 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ...
2026-03-21T12:31:27.123 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.text.
2026-03-21T12:31:27.129 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../28-python3-jaraco.text_3.6.0-2_all.deb ...
2026-03-21T12:31:27.130 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.text (3.6.0-2) ...
2026-03-21T12:31:27.145 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jaraco.collections.
2026-03-21T12:31:27.151 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../29-python3-jaraco.collections_3.4.0-2_all.deb ...
2026-03-21T12:31:27.152 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ...
2026-03-21T12:31:27.168 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-tempora.
2026-03-21T12:31:27.174 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../30-python3-tempora_4.1.2-1_all.deb ...
2026-03-21T12:31:27.175 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-tempora (4.1.2-1) ...
2026-03-21T12:31:27.190 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-portend.
2026-03-21T12:31:27.196 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../31-python3-portend_3.0.0-1_all.deb ...
2026-03-21T12:31:27.197 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-portend (3.0.0-1) ...
2026-03-21T12:31:27.211 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-zc.lockfile.
2026-03-21T12:31:27.217 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../32-python3-zc.lockfile_2.0-1_all.deb ...
2026-03-21T12:31:27.218 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-zc.lockfile (2.0-1) ...
2026-03-21T12:31:27.233 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cherrypy3.
2026-03-21T12:31:27.238 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../33-python3-cherrypy3_18.6.1-4_all.deb ...
2026-03-21T12:31:27.239 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ...
2026-03-21T12:31:27.267 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-natsort.
2026-03-21T12:31:27.272 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../34-python3-natsort_8.0.2-1_all.deb ...
2026-03-21T12:31:27.273 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-natsort (8.0.2-1) ...
2026-03-21T12:31:27.289 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-modules-core.
2026-03-21T12:31:27.295 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../35-ceph-mgr-modules-core_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:27.296 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-modules-core (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.329 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libsqlite3-mod-ceph.
2026-03-21T12:31:27.335 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../36-libsqlite3-mod-ceph_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.336 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libsqlite3-mod-ceph (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.352 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr.
2026-03-21T12:31:27.358 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../37-ceph-mgr_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.359 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.387 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mon.
2026-03-21T12:31:27.394 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../38-ceph-mon_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.395 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mon (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.492 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libfuse2:amd64.
2026-03-21T12:31:27.498 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../39-libfuse2_2.9.9-5ubuntu3_amd64.deb ...
2026-03-21T12:31:27.498 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-21T12:31:27.517 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-osd.
2026-03-21T12:31:27.525 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../40-ceph-osd_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.526 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-osd (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.805 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph.
2026-03-21T12:31:27.813 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../41-ceph_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.814 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.830 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-fuse.
2026-03-21T12:31:27.836 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../42-ceph-fuse_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.837 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-fuse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.866 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mds.
2026-03-21T12:31:27.873 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../43-ceph-mds_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.874 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mds (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.920 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package cephadm.
2026-03-21T12:31:27.926 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../44-cephadm_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:27.927 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking cephadm (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:27.947 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-asyncssh.
2026-03-21T12:31:27.954 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../45-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ...
2026-03-21T12:31:27.955 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-21T12:31:27.983 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-cephadm.
2026-03-21T12:31:27.988 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../46-ceph-mgr-cephadm_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:27.989 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-cephadm (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:28.017 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-repoze.lru.
2026-03-21T12:31:28.023 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../47-python3-repoze.lru_0.7-2_all.deb ...
2026-03-21T12:31:28.024 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-repoze.lru (0.7-2) ...
2026-03-21T12:31:28.042 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-routes.
2026-03-21T12:31:28.048 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../48-python3-routes_2.5.1-1ubuntu1_all.deb ...
2026-03-21T12:31:28.048 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ...
2026-03-21T12:31:28.075 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-dashboard.
2026-03-21T12:31:28.080 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../49-ceph-mgr-dashboard_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:28.081 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-dashboard (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:28.745 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn-lib:amd64.
2026-03-21T12:31:28.752 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../50-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ...
2026-03-21T12:31:28.753 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-21T12:31:28.810 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-joblib.
2026-03-21T12:31:28.816 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../51-python3-joblib_0.17.0-4ubuntu1_all.deb ...
2026-03-21T12:31:28.817 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ...
2026-03-21T12:31:28.852 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-threadpoolctl.
2026-03-21T12:31:28.858 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../52-python3-threadpoolctl_3.1.0-1_all.deb ...
2026-03-21T12:31:28.858 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ...
2026-03-21T12:31:28.875 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-sklearn.
2026-03-21T12:31:28.880 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../53-python3-sklearn_0.23.2-5ubuntu6_all.deb ...
2026-03-21T12:31:28.881 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-21T12:31:29.004 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local.
2026-03-21T12:31:29.011 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../54-ceph-mgr-diskprediction-local_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:29.011 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-diskprediction-local (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:29.273 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-cachetools.
2026-03-21T12:31:29.279 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../55-python3-cachetools_5.0.0-1_all.deb ...
2026-03-21T12:31:29.280 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-cachetools (5.0.0-1) ...
2026-03-21T12:31:29.296 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-rsa.
2026-03-21T12:31:29.303 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../56-python3-rsa_4.8-1_all.deb ...
2026-03-21T12:31:29.304 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-rsa (4.8-1) ...
2026-03-21T12:31:29.323 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-google-auth.
2026-03-21T12:31:29.329 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../57-python3-google-auth_1.5.1-3_all.deb ...
2026-03-21T12:31:29.330 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-google-auth (1.5.1-3) ...
2026-03-21T12:31:29.348 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-requests-oauthlib.
2026-03-21T12:31:29.355 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../58-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ...
2026-03-21T12:31:29.355 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-21T12:31:29.373 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-websocket.
2026-03-21T12:31:29.379 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../59-python3-websocket_1.2.3-1_all.deb ...
2026-03-21T12:31:29.380 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-websocket (1.2.3-1) ...
2026-03-21T12:31:29.399 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-kubernetes.
2026-03-21T12:31:29.405 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../60-python3-kubernetes_12.0.1-1ubuntu1_all.deb ...
2026-03-21T12:31:29.406 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-21T12:31:29.546 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-mgr-k8sevents.
2026-03-21T12:31:29.554 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../61-ceph-mgr-k8sevents_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:29.555 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-mgr-k8sevents (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:29.572 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libonig5:amd64.
2026-03-21T12:31:29.578 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../62-libonig5_6.9.7.1-2build1_amd64.deb ...
2026-03-21T12:31:29.579 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-21T12:31:29.597 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libjq1:amd64.
2026-03-21T12:31:29.603 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../63-libjq1_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-21T12:31:29.604 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-21T12:31:29.618 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package jq.
2026-03-21T12:31:29.624 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../64-jq_1.6-2.1ubuntu3.1_amd64.deb ...
2026-03-21T12:31:29.625 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ...
2026-03-21T12:31:29.639 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package socat.
2026-03-21T12:31:29.645 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../65-socat_1.7.4.1-3ubuntu4_amd64.deb ...
2026-03-21T12:31:29.646 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ...
2026-03-21T12:31:29.669 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package xmlstarlet.
2026-03-21T12:31:29.675 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../66-xmlstarlet_1.6.1-2.1_amd64.deb ...
2026-03-21T12:31:29.676 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking xmlstarlet (1.6.1-2.1) ...
2026-03-21T12:31:29.720 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-test.
2026-03-21T12:31:29.726 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../67-ceph-test_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:29.726 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-test (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:31.212 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package ceph-volume.
2026-03-21T12:31:31.218 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../68-ceph-volume_20.2.0-712-g70f8415b-1jammy_all.deb ...
2026-03-21T12:31:31.219 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking ceph-volume (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:31.245 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-daemon.
2026-03-21T12:31:31.252 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../69-libcephfs-daemon_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:31.253 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-daemon (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:31.268 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-proxy2.
2026-03-21T12:31:31.274 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../70-libcephfs-proxy2_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:31.275 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-proxy2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:31.289 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libcephfs-dev.
2026-03-21T12:31:31.296 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../71-libcephfs-dev_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:31.297 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libcephfs-dev (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:31.315 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package nvme-cli.
2026-03-21T12:31:31.322 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../72-nvme-cli_1.16-3ubuntu0.3_amd64.deb ...
2026-03-21T12:31:31.322 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ...
2026-03-21T12:31:31.361 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python-asyncssh-doc.
2026-03-21T12:31:31.367 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../73-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ...
2026-03-21T12:31:31.367 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-21T12:31:31.409 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-iniconfig.
2026-03-21T12:31:31.416 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../74-python3-iniconfig_1.1.1-2_all.deb ...
2026-03-21T12:31:31.416 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-iniconfig (1.1.1-2) ...
2026-03-21T12:31:31.432 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pluggy.
2026-03-21T12:31:31.438 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../75-python3-pluggy_0.13.0-7.1_all.deb ...
2026-03-21T12:31:31.439 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pluggy (0.13.0-7.1) ...
2026-03-21T12:31:31.456 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-psutil.
2026-03-21T12:31:31.462 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../76-python3-psutil_5.9.0-1build1_amd64.deb ...
2026-03-21T12:31:31.463 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-psutil (5.9.0-1build1) ...
2026-03-21T12:31:31.484 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-py.
2026-03-21T12:31:31.490 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../77-python3-py_1.10.0-1_all.deb ...
2026-03-21T12:31:31.491 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-py (1.10.0-1) ...
2026-03-21T12:31:31.514 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pygments.
2026-03-21T12:31:31.520 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../78-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ...
2026-03-21T12:31:31.521 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-21T12:31:31.581 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-toml.
2026-03-21T12:31:31.588 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../79-python3-toml_0.10.2-1_all.deb ...
2026-03-21T12:31:31.589 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-toml (0.10.2-1) ...
2026-03-21T12:31:31.606 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-pytest.
2026-03-21T12:31:31.612 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../80-python3-pytest_6.2.5-1ubuntu2_all.deb ...
2026-03-21T12:31:31.613 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ...
2026-03-21T12:31:31.652 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-simplejson.
2026-03-21T12:31:31.660 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../81-python3-simplejson_3.17.6-1build1_amd64.deb ...
2026-03-21T12:31:31.660 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-simplejson (3.17.6-1build1) ...
2026-03-21T12:31:31.681 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-webob.
2026-03-21T12:31:31.687 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../82-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ...
2026-03-21T12:31:31.688 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-21T12:31:31.706 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package qttranslations5-l10n.
2026-03-21T12:31:31.712 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../83-qttranslations5-l10n_5.15.3-1_all.deb ...
2026-03-21T12:31:31.712 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ...
2026-03-21T12:31:31.813 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package radosgw.
2026-03-21T12:31:31.819 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../84-radosgw_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:31.820 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking radosgw (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:32.201 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package rbd-fuse.
2026-03-21T12:31:32.208 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../85-rbd-fuse_20.2.0-712-g70f8415b-1jammy_amd64.deb ...
2026-03-21T12:31:32.208 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking rbd-fuse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:32.228 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package smartmontools.
2026-03-21T12:31:32.234 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../86-smartmontools_7.2-1ubuntu0.1_amd64.deb ...
2026-03-21T12:31:32.243 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ...
2026-03-21T12:31:32.284 INFO:teuthology.orchestra.run.vm01.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ...
2026-03-21T12:31:32.535 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service.
2026-03-21T12:31:32.535 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service.
2026-03-21T12:31:32.906 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-iniconfig (1.1.1-2) ...
2026-03-21T12:31:32.971 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ...
2026-03-21T12:31:32.974 INFO:teuthology.orchestra.run.vm01.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ...
2026-03-21T12:31:33.035 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service.
2026-03-21T12:31:33.285 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service.
2026-03-21T12:31:33.676 INFO:teuthology.orchestra.run.vm01.stdout:nvmf-connect.target is a disabled or a static unit, not starting it.
2026-03-21T12:31:33.692 INFO:teuthology.orchestra.run.vm01.stdout:Setting up cephadm (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:33.735 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user cephadm....done
2026-03-21T12:31:33.745 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.classes (3.2.1-3) ...
2026-03-21T12:31:33.809 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ...
2026-03-21T12:31:33.811 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.functools (3.4.0-2) ...
2026-03-21T12:31:33.875 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-repoze.lru (0.7-2) ...
2026-03-21T12:31:33.943 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ...
2026-03-21T12:31:33.946 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-py (1.10.0-1) ...
2026-03-21T12:31:34.034 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ...
2026-03-21T12:31:34.153 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cachetools (5.0.0-1) ...
2026-03-21T12:31:34.221 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-threadpoolctl (3.1.0-1) ...
2026-03-21T12:31:34.290 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-argparse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:34.362 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ...
2026-03-21T12:31:34.365 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libnbd0 (1.10.5-1) ...
2026-03-21T12:31:34.367 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ...
2026-03-21T12:31:34.370 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ...
2026-03-21T12:31:34.372 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-psutil (5.9.0-1build1) ...
2026-03-21T12:31:34.492 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-natsort (8.0.2-1) ...
2026-03-21T12:31:34.561 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-proxy2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:34.563 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ...
2026-03-21T12:31:34.635 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-simplejson (3.17.6-1build1) ...
2026-03-21T12:31:34.722 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ...
2026-03-21T12:31:34.996 INFO:teuthology.orchestra.run.vm01.stdout:Setting up qttranslations5-l10n (5.15.3-1) ...
2026-03-21T12:31:34.998 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ...
2026-03-21T12:31:35.091 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ...
2026-03-21T12:31:35.227 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ...
2026-03-21T12:31:35.319 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.text (3.6.0-2) ...
2026-03-21T12:31:35.384 INFO:teuthology.orchestra.run.vm01.stdout:Setting up socat (1.7.4.1-3ubuntu4) ...
2026-03-21T12:31:35.386 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-ceph-common (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:35.481 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ...
2026-03-21T12:31:36.029 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:36.034 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-toml (0.10.2-1) ...
2026-03-21T12:31:36.103 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ...
2026-03-21T12:31:36.105 INFO:teuthology.orchestra.run.vm01.stdout:Setting up xmlstarlet (1.6.1-2.1) ...
2026-03-21T12:31:36.107 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pluggy (0.13.0-7.1) ...
2026-03-21T12:31:36.175 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-zc.lockfile (2.0-1) ...
2026-03-21T12:31:36.239 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:36.241 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rsa (4.8-1) ...
2026-03-21T12:31:36.312 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-tempora (4.1.2-1) ...
2026-03-21T12:31:36.381 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-prettytable (2.5.0-2) ...
2026-03-21T12:31:36.458 INFO:teuthology.orchestra.run.vm01.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ...
2026-03-21T12:31:36.461 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-websocket (1.2.3-1) ...
2026-03-21T12:31:36.538 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ...
2026-03-21T12:31:36.540 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ...
2026-03-21T12:31:36.607 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ...
2026-03-21T12:31:36.712 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jaraco.collections (3.4.0-2) ...
2026-03-21T12:31:36.781 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ...
2026-03-21T12:31:36.783 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ...
2026-03-21T12:31:36.918 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-portend (3.0.0-1) ...
2026-03-21T12:31:36.983 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ...
2026-03-21T12:31:36.985 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-google-auth (1.5.1-3) ...
2026-03-21T12:31:37.062 INFO:teuthology.orchestra.run.vm01.stdout:Setting up jq (1.6-2.1ubuntu3.1) ...
2026-03-21T12:31:37.065 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cherrypy3 (18.6.1-4) ...
2026-03-21T12:31:37.190 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ...
2026-03-21T12:31:37.193 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librados2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.195 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librgw2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.197 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libsqlite3-mod-ceph (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.200 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ...
2026-03-21T12:31:37.745 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs2 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.747 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libradosstriper1 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.749 INFO:teuthology.orchestra.run.vm01.stdout:Setting up librbd1 (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.752 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-modules-core (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.754 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-fuse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:37.814 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-21T12:31:37.814 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
2026-03-21T12:31:38.186 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-dev (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.188 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rados (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.191 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libcephfs-daemon (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.193 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rbd (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.195 INFO:teuthology.orchestra.run.vm01.stdout:Setting up rbd-fuse (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.197 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-rgw (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.200 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-cephfs (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.202 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-common (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.234 INFO:teuthology.orchestra.run.vm01.stdout:Adding group ceph....done
2026-03-21T12:31:38.272 INFO:teuthology.orchestra.run.vm01.stdout:Adding system user ceph....done
2026-03-21T12:31:38.280 INFO:teuthology.orchestra.run.vm01.stdout:Setting system user ceph properties....done
2026-03-21T12:31:38.284 INFO:teuthology.orchestra.run.vm01.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory
2026-03-21T12:31:38.350 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
2026-03-21T12:31:38.592 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
2026-03-21T12:31:38.967 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-test (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:38.969 INFO:teuthology.orchestra.run.vm01.stdout:Setting up radosgw (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:39.227 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-21T12:31:39.227 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.
2026-03-21T12:31:39.589 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-base (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:39.678 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service.
2026-03-21T12:31:40.039 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mds (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:40.105 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-21T12:31:40.105 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.
2026-03-21T12:31:40.471 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:40.548 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-21T12:31:40.549 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
2026-03-21T12:31:40.921 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-osd (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.001 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-21T12:31:41.001 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
2026-03-21T12:31:41.363 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-k8sevents (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.365 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-diskprediction-local (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.378 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mon (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.438 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-21T12:31:41.438 INFO:teuthology.orchestra.run.vm01.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
2026-03-21T12:31:41.810 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-cephadm (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.822 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.824 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-mgr-dashboard (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.836 INFO:teuthology.orchestra.run.vm01.stdout:Setting up ceph-volume (20.2.0-712-g70f8415b-1jammy) ...
2026-03-21T12:31:41.952 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-21T12:31:42.031 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ...
2026-03-21T12:31:42.328 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.328 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date.
2026-03-21T12:31:42.328 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.328 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted:
2026-03-21T12:31:42.331 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart apache-htcacheclean.service
2026-03-21T12:31:42.336 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart rsyslog.service
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred:
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.339 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted.
2026-03-21T12:31:42.340 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.340 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries.
2026-03-21T12:31:42.340 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:42.340 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-21T12:31:43.265 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-21T12:31:43.268 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-jmespath python3-xmltodict s3cmd
2026-03-21T12:31:43.345 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists...
2026-03-21T12:31:43.514 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree...
2026-03-21T12:31:43.514 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information...
2026-03-21T12:31:43.639 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required:
2026-03-21T12:31:43.639 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1
2026-03-21T12:31:43.639 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-21T12:31:43.639 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-21T12:31:43.653 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed:
2026-03-21T12:31:43.653 INFO:teuthology.orchestra.run.vm01.stdout: python3-jmespath python3-xmltodict s3cmd
2026-03-21T12:31:43.677 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 3 newly installed, 0 to remove and 36 not upgraded.
2026-03-21T12:31:43.677 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 155 kB of archives.
2026-03-21T12:31:43.677 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 678 kB of additional disk space will be used.
2026-03-21T12:31:43.677 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB]
2026-03-21T12:31:43.693 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB]
2026-03-21T12:31:43.695 INFO:teuthology.orchestra.run.vm01.stdout:Get:3 http://archive.ubuntu.com/ubuntu jammy/universe amd64 s3cmd all 2.2.0-1 [120 kB]
2026-03-21T12:31:43.894 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 155 kB in 0s (2801 kB/s)
2026-03-21T12:31:43.911 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-jmespath.
2026-03-21T12:31:43.938 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 126082 files and directories currently installed.)
2026-03-21T12:31:43.940 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ...
2026-03-21T12:31:43.941 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-jmespath (0.10.0-1) ...
2026-03-21T12:31:43.959 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package python3-xmltodict.
2026-03-21T12:31:43.965 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ...
2026-03-21T12:31:43.966 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking python3-xmltodict (0.12.0-2) ...
2026-03-21T12:31:43.981 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package s3cmd.
2026-03-21T12:31:43.986 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../archives/s3cmd_2.2.0-1_all.deb ...
2026-03-21T12:31:43.986 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking s3cmd (2.2.0-1) ...
2026-03-21T12:31:44.020 INFO:teuthology.orchestra.run.vm01.stdout:Setting up s3cmd (2.2.0-1) ...
2026-03-21T12:31:44.108 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-xmltodict (0.12.0-2) ...
2026-03-21T12:31:44.175 INFO:teuthology.orchestra.run.vm01.stdout:Setting up python3-jmespath (0.10.0-1) ...
2026-03-21T12:31:44.247 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ...
2026-03-21T12:31:44.557 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.557 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date.
2026-03-21T12:31:44.557 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.557 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted:
2026-03-21T12:31:44.560 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart apache-htcacheclean.service
2026-03-21T12:31:44.565 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart rsyslog.service
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred:
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted.
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries.
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:44.568 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host.
2026-03-21T12:31:45.440 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead.
2026-03-21T12:31:45.444 DEBUG:teuthology.parallel:result is None
2026-03-21T12:31:45.444 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-21T12:31:46.049 DEBUG:teuthology.orchestra.run.vm01:> dpkg-query -W -f '${Version}' ceph
2026-03-21T12:31:46.058 INFO:teuthology.orchestra.run.vm01.stdout:20.2.0-712-g70f8415b-1jammy
2026-03-21T12:31:46.059 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712-g70f8415b-1jammy
2026-03-21T12:31:46.059 INFO:teuthology.task.install:The correct ceph version 20.2.0-712-g70f8415b-1jammy is installed.
2026-03-21T12:31:46.059 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-21T12:31:46.059 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:46.059 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-21T12:31:46.111 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-21T12:31:46.111 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:46.111 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/daemon-helper
2026-03-21T12:31:46.161 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-21T12:31:46.212 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-21T12:31:46.212 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:46.212 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-21T12:31:46.264 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-21T12:31:46.312 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
2026-03-21T12:31:46.312 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:46.312 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/usr/bin/stdin-killer
2026-03-21T12:31:46.360 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-21T12:31:46.412 INFO:teuthology.run_tasks:Running task ceph...
2026-03-21T12:31:46.454 INFO:tasks.ceph:Making ceph log dir writeable by non-root...
2026-03-21T12:31:46.454 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 777 /var/log/ceph
2026-03-21T12:31:46.465 INFO:tasks.ceph:Disabling ceph logrotate...
2026-03-21T12:31:46.465 DEBUG:teuthology.orchestra.run.vm01:> sudo rm -f -- /etc/logrotate.d/ceph
2026-03-21T12:31:46.513 INFO:tasks.ceph:Creating extra log directories...
2026-03-21T12:31:46.513 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger
2026-03-21T12:31:46.564 INFO:tasks.ceph:Creating ceph cluster ceph...
2026-03-21T12:31:46.564 INFO:tasks.ceph:config {'conf': {'client': {'rbd default data pool': 'datapool', 'rbd default features': 1}, 'global': {'mon client directed command retry': 5, 'mon warn on pool no app': False, 'ms inject socket failures': 5000}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'debug rbd': 20}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bluestore block size': 96636764160, 'bluestore compression algorithm': 'zlib', 'bluestore compression mode': 'aggressive', 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(OSD_SLOW_PING_TIME'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True}
2026-03-21T12:31:46.564 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps/3480', 'branch': 'tentacle', 'description': 'rbd/cli/{base/install clusters/{fixed-1} conf/{disable-pool-app} data-pool/replicated features/layering msgr-failures/few objectstore/bluestore-comp-zlib supported-random-distro$/{ubuntu_latest} workloads/rbd_support_module_recovery}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '3480', 'ktype': 'distro', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'kyr-2026-03-20_22:04:26-rbd-tentacle-none-default-vps', 'no_nested_subset': False, 'os_type': 'ubuntu', 'os_version': '22.04', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'rbd default data pool': 'datapool', 'rbd default features': 1}, 'global': {'mon client directed command retry': 5, 'mon warn on pool no app': False, 'ms inject socket failures': 5000}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'debug rbd': 20}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'bluestore block size': 96636764160, 'bluestore compression algorithm': 'zlib', 'bluestore compression mode': 'aggressive', 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}}, 'flavor': 'default', 'fs': 'xfs', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(OSD_SLOW_PING_TIME'], 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'ceph-deploy': {'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'global': {'osd crush chooseleaf type': 0, 'osd pool default pg num': 128, 'osd pool default pgp num': 128, 'osd pool default size': 2}, 'mon': {}}}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'thrashosds': {'bdev_inject_crash': 2, 'bdev_inject_crash_probability': 0.5}, 'workunit': {'branch': 'tt-tentacle', 'sha1': '0392f78529848ec72469e8e431875cb98d3a5fb4'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'mgr.x', 'osd.0', 'osd.1', 'osd.2', 'client.0']], 'seed': 3051, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'sleep_before_teardown': 0, 'subset': '1/128', 'suite': 'rbd', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_0392f78529848ec72469e8e431875cb98d3a5fb4/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '0392f78529848ec72469e8e431875cb98d3a5fb4', 'targets': {'vm01.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvxfiM7gfvyd1cIcNHBELq4poOWY0FLMokV9lmIm6/pi3mdF5tLyx+tYT25K2pNt4FTYP4Xd6Rm0kN0xByPURU='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'exec': {'client.0': ['sudo ceph osd pool create datapool 4', 'rbd pool init datapool']}}, {'install': {'extra_system_packages': ['fio']}}, {'workunit': {'clients': {'client.0': ['rbd/rbd_support_module_recovery.sh']}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'clyso-debian-13', 'teuthology_repo': 'https://github.com/clyso/teuthology', 'teuthology_sha1': '1c580df7a9c7c2aadc272da296344fd99f27c444', 'timestamp': '2026-03-20_22:04:26', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345'}
2026-03-21T12:31:46.564 DEBUG:teuthology.orchestra.run.vm01:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data
2026-03-21T12:31:46.605 DEBUG:teuthology.orchestra.run.vm01:> sudo install -d -m0777 -- /var/run/ceph
2026-03-21T12:31:46.659 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:46.659 DEBUG:teuthology.orchestra.run.vm01:> dd if=/scratch_devs of=/dev/stdout
2026-03-21T12:31:46.709 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4']
2026-03-21T12:31:46.709 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_1
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:  File: /dev/vg_nvme/lv_1 -> ../dm-0
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:  Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 780 Links: 1
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 12:30:52.061929000 +0000
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 12:30:51.937929000 +0000
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 12:30:51.937929000 +0000
2026-03-21T12:31:46.753 INFO:teuthology.orchestra.run.vm01.stdout: Birth: -
2026-03-21T12:31:46.753 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1
2026-03-21T12:31:46.800 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-21T12:31:46.800 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-21T12:31:46.800 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.00013367 s, 3.8 MB/s
2026-03-21T12:31:46.801 DEBUG:teuthology.orchestra.run.vm01:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1
2026-03-21T12:31:46.846 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_2
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:  File: /dev/vg_nvme/lv_2 -> ../dm-1
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:  Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 812 Links: 1
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 12:30:52.357929000 +0000
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 12:30:52.225929000 +0000
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 12:30:52.225929000 +0000
2026-03-21T12:31:46.889 INFO:teuthology.orchestra.run.vm01.stdout: Birth: -
2026-03-21T12:31:46.889 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1
2026-03-21T12:31:46.937 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-21T12:31:46.937 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-21T12:31:46.937 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.00012277 s, 4.2 MB/s
2026-03-21T12:31:46.938 DEBUG:teuthology.orchestra.run.vm01:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2
2026-03-21T12:31:46.982 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_3
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:  File: /dev/vg_nvme/lv_3 -> ../dm-2
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:  Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 844 Links: 1
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 12:30:52.657929000 +0000
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 12:30:52.517929000 +0000
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 12:30:52.517929000 +0000
2026-03-21T12:31:47.025 INFO:teuthology.orchestra.run.vm01.stdout: Birth: -
2026-03-21T12:31:47.025 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1
2026-03-21T12:31:47.072 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-21T12:31:47.072 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-21T12:31:47.072 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000138218 s, 3.7 MB/s
2026-03-21T12:31:47.073 DEBUG:teuthology.orchestra.run.vm01:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3
2026-03-21T12:31:47.118 DEBUG:teuthology.orchestra.run.vm01:> stat /dev/vg_nvme/lv_4
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:  File: /dev/vg_nvme/lv_4 -> ../dm-3
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:  Size: 7 Blocks: 0 IO Block: 4096 symbolic link
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:Device: 5h/5d Inode: 876 Links: 1
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root)
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:Access: 2026-03-21 12:30:52.809929000 +0000
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:Modify: 2026-03-21 12:30:52.805929000 +0000
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout:Change: 2026-03-21 12:30:52.805929000 +0000
2026-03-21T12:31:47.161 INFO:teuthology.orchestra.run.vm01.stdout: Birth: -
2026-03-21T12:31:47.161 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1
2026-03-21T12:31:47.208 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records in
2026-03-21T12:31:47.208 INFO:teuthology.orchestra.run.vm01.stderr:1+0 records out
2026-03-21T12:31:47.208 INFO:teuthology.orchestra.run.vm01.stderr:512 bytes copied, 0.000116578 s, 4.4 MB/s
2026-03-21T12:31:47.209 DEBUG:teuthology.orchestra.run.vm01:> !
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4
2026-03-21T12:31:47.253 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}
2026-03-21T12:31:47.254 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm01.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}}
2026-03-21T12:31:47.254 INFO:tasks.ceph:Generating config...
2026-03-21T12:31:47.254 INFO:tasks.ceph:[client] rbd default data pool = datapool
2026-03-21T12:31:47.254 INFO:tasks.ceph:[client] rbd default features = 1
2026-03-21T12:31:47.254 INFO:tasks.ceph:[global] mon client directed command retry = 5
2026-03-21T12:31:47.254 INFO:tasks.ceph:[global] mon warn on pool no app = False
2026-03-21T12:31:47.254 INFO:tasks.ceph:[global] ms inject socket failures = 5000
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mgr] debug mgr = 20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mgr] debug ms = 1
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mgr] debug rbd = 20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mon] debug mon = 20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mon] debug ms = 1
2026-03-21T12:31:47.254 INFO:tasks.ceph:[mon] debug paxos = 20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] bluestore block size = 96636764160
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] bluestore compression algorithm = zlib
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] bluestore compression mode = aggressive
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] bluestore fsck on mount = True
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] debug bluefs = 1/20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] debug bluestore = 1/20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] debug ms = 1
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] debug osd = 20
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] debug rocksdb = 4/10
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] mon osd backfillfull_ratio = 0.85
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] mon osd full ratio = 0.9
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] mon osd nearfull ratio = 0.8
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] osd failsafe full ratio = 0.95
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] osd objectstore = bluestore
2026-03-21T12:31:47.254 INFO:tasks.ceph:[osd] osd shutdown pgref assert = True
2026-03-21T12:31:47.255 INFO:tasks.ceph:Setting up mon.a...
2026-03-21T12:31:47.255 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring
2026-03-21T12:31:47.310 INFO:teuthology.orchestra.run.vm01.stdout:creating /etc/ceph/ceph.keyring
2026-03-21T12:31:47.312 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring
2026-03-21T12:31:47.371 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-03-21T12:31:47.421 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.101')]
2026-03-21T12:31:47.421 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.101', 'mon client directed command retry': 5, 'mon warn on pool no app': False, 'ms inject socket failures': 5000}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': 'true', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'bluestore block size': 96636764160, 'bluestore compression algorithm': 'zlib', 'bluestore compression mode': 'aggressive', 'bluestore fsck on mount': True, 'debug bluefs': '1/20', 'debug bluestore': '1/20', 'debug ms': 1, 'debug osd': 20, 'debug rocksdb': '4/10', 'mon osd backfillfull_ratio': 0.85, 'mon osd full ratio': 0.9, 'mon osd nearfull ratio': 0.8, 'osd failsafe full ratio': 0.95, 'osd mclock iops capacity threshold hdd': 49000, 'osd objectstore': 'bluestore', 'osd shutdown pgref assert': True}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false', 'debug rbd': 20}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'rbd default data pool': 'datapool', 'rbd default features': 1}, 'mon.a': {}}
2026-03-21T12:31:47.421 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:47.421 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf
2026-03-21T12:31:47.465 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.101 --print /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:47.522 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: generated fsid 2056cbb5-2007-4290-89d5-61be1cdf6e81
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:setting min_mon_release = tentacle
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:epoch 0
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:fsid 2056cbb5-2007-4290-89d5-61be1cdf6e81
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:last_changed 2026-03-21T12:31:47.523307+0000
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:created 2026-03-21T12:31:47.523307+0000
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:min_mon_release 20 (tentacle)
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:election_strategy: 1
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:0: [v2:192.168.123.101:3300/0,v1:192.168.123.101:6789/0] mon.a
2026-03-21T12:31:47.523 INFO:teuthology.orchestra.run.vm01.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (1
monitors) 2026-03-21T12:31:47.525 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf 2026-03-21T12:31:47.569 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID 2056cbb5-2007-4290-89d5-61be1cdf6e81... 2026-03-21T12:31:47.569 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout:[global] 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: chdir = "" 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: auth supported = cephx 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: filestore xattr use omap = true 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: mon clock drift allowed = 1.000 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: osd crush chooseleaf type = 0 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: auth debug = true 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: ms die on old message = true 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: ms die on bug = true 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: mon pg warn max object skew = 0 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 
INFO:teuthology.orchestra.run.vm01.stdout: # disable pg_autoscaler by default for new pools 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default size = 2 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow primary affinity = true 2026-03-21T12:31:47.628 INFO:teuthology.orchestra.run.vm01.stdout: mon osd allow pg remap = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on legacy crush tunables = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on crush straw calc version zero = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on no sortbitwise = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on osd down out interval zero = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on too few osds = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon_allow_pool_size_one = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd default data pool replay window = 5 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 
INFO:teuthology.orchestra.run.vm01.stdout: mon allow pool delete = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon cluster log file level = debug 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: debug asserts on shutdown = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon health detail to clog = false 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon host = 192.168.123.101 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon client directed command retry = 5 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: mon warn on pool no app = False 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: ms inject socket failures = 5000 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: fsid = 2056cbb5-2007-4290-89d5-61be1cdf6e81 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:[osd] 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd journal size = 100 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub load threshold = 5.0 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd scrub max interval = 600 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock profile = high_recovery_ops 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock skip benchmark = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd recover clone overlap = true 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd recovery max chunk = 1048576 2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: 
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug shutdown = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug op order = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug verify stray on activate = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug trim objects = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd open classes on start = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug pg log writeout = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd deep scrub update digest min age = 30
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd map max advance = 10
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: journal zero on create = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: filestore ondisk finisher threads = 3
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: filestore apply finisher threads = 3
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: bdev debug aio = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: osd debug misdirected ops = true
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: bluestore block size = 96636764160
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: bluestore compression algorithm = zlib
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: bluestore compression mode = aggressive
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: bluestore fsck on mount = True
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: debug bluefs = 1/20
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: debug bluestore = 1/20
2026-03-21T12:31:47.629 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug osd = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug rocksdb = 4/10
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon osd backfillfull_ratio = 0.85
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon osd full ratio = 0.9
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon osd nearfull ratio = 0.8
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: osd failsafe full ratio = 0.95
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: osd mclock iops capacity threshold hdd = 49000
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: osd objectstore = bluestore
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: osd shutdown pgref assert = True
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:[mgr]
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug mgr = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mgr/telemetry/nag = false
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug rbd = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:[mon]
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug ms = 1
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug mon = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug paxos = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: debug auth = 20
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon data avail warn = 5
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon mgr mkfs grace = 240
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min pgs per osd = 4
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon osd reporter subtree level = osd
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon osd prime pg temp = true
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon reweight min bytes per osd = 10
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: # rotate auth tickets quickly to exercise renewal paths
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: auth mon ticket ttl = 660 # 11m
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: auth service ticket ttl = 240 # 4m
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: # don't complain about insecure global_id in the test suite
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim = false
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: # 1m isn't quite enough
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon_down_mkfs_grace = 2m
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: mon_warn_on_filestore_osds = false
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:[client]
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: rgw cache enabled = true
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable ops log = true
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: rgw enable usage log = true
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: rbd default data pool = datapool
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout: rbd default features = 1
2026-03-21T12:31:47.630 INFO:teuthology.orchestra.run.vm01.stdout:[mon.a]
2026-03-21T12:31:47.635 INFO:tasks.ceph:Creating admin key on mon.a...
2026-03-21T12:31:47.635 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring
2026-03-21T12:31:47.701 INFO:tasks.ceph:Copying monmap to all nodes...
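[editor's note] The "writing out conf" step earlier in this log serializes a nested section → options dict into the INI-style ceph.conf that the `sudo tee` command then installs; the stdout lines above are that rendered file. A minimal sketch of the serialization, using a hypothetical `render_ceph_conf` helper (this is not teuthology's actual writer):

```python
# Sketch only: mimic how a nested {'section': {'option': value}} dict
# maps onto the INI-style ceph.conf seen in the stdout lines above.
def render_ceph_conf(conf):
    lines = []
    for section, options in conf.items():
        lines.append(f"[{section}]")
        for key, value in options.items():
            # ceph.conf option names may contain spaces; values are written verbatim
            lines.append(f"\t{key} = {value}")
    return "\n".join(lines) + "\n"

conf = {
    "global": {"mon host": "192.168.123.101", "auth supported": "cephx"},
    "client": {"rbd default data pool": "datapool", "rbd default features": 1},
}
print(render_ceph_conf(conf))
```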
2026-03-21T12:31:47.701 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:47.701 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout
2026-03-21T12:31:47.745 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:47.745 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout
2026-03-21T12:31:47.789 INFO:tasks.ceph:Sending monmap to node ubuntu@vm01.local
2026-03-21T12:31:47.789 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:47.789 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/ceph/ceph.keyring
2026-03-21T12:31:47.789 DEBUG:teuthology.orchestra.run.vm01:> sudo chmod 0644 /etc/ceph/ceph.keyring
2026-03-21T12:31:47.845 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:47.845 DEBUG:teuthology.orchestra.run.vm01:> dd of=/home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:47.889 INFO:tasks.ceph:Setting up mon nodes...
2026-03-21T12:31:47.889 INFO:tasks.ceph:Setting up mgr nodes...
2026-03-21T12:31:47.889 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mgr/ceph-x && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.x /var/lib/ceph/mgr/ceph-x/keyring
2026-03-21T12:31:47.951 INFO:teuthology.orchestra.run.vm01.stdout:creating /var/lib/ceph/mgr/ceph-x/keyring
2026-03-21T12:31:47.953 INFO:tasks.ceph:Setting up mds nodes...
2026-03-21T12:31:47.953 INFO:tasks.ceph_client:Setting up client nodes...
2026-03-21T12:31:47.953 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring
2026-03-21T12:31:48.014 INFO:teuthology.orchestra.run.vm01.stdout:creating /etc/ceph/ceph.client.0.keyring
2026-03-21T12:31:48.021 INFO:tasks.ceph:Running mkfs on osd nodes...
2026-03-21T12:31:48.021 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm01.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}}
2026-03-21T12:31:48.021 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/osd/ceph-0
2026-03-21T12:31:48.069 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}
2026-03-21T12:31:48.069 INFO:tasks.ceph:role: osd.0
2026-03-21T12:31:48.069 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm01.local
2026-03-21T12:31:48.069 DEBUG:teuthology.orchestra.run.vm01:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout: = reflink=1 bigtime=0 inobtcount=0
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout: = sunit=0 swidth=0 blks
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout:log =internal log bsize=4096 blocks=2560, version=2
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T12:31:48.118 INFO:teuthology.orchestra.run.vm01.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T12:31:48.122 INFO:teuthology.orchestra.run.vm01.stdout:Discarding blocks...Done.
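[editor's note] As a sanity check on the mkfs.xfs geometry reported above (a sketch; all numbers are copied from the log output): 4 allocation groups of 1310464 blocks tile the 5241856-block data area exactly, and at a 4 KiB block size that is roughly 20 GiB, consistent with each logical volume being 25%VG of the vg_nvme volume group.

```python
# Numbers copied from the mkfs.xfs output above.
bsize = 4096          # data block size (bsize=4096)
blocks = 5241856      # data blocks (blocks=5241856)
agcount, agsize = 4, 1310464

assert agcount * agsize == blocks        # the AGs cover the data area exactly
data_bytes = bsize * blocks
print(f"{data_bytes / 2**30:.2f} GiB")   # roughly 20 GiB per OSD device
```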
2026-03-21T12:31:48.123 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm01.local -o noatime
2026-03-21T12:31:48.123 DEBUG:teuthology.orchestra.run.vm01:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0
2026-03-21T12:31:48.210 DEBUG:teuthology.orchestra.run.vm01:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0
2026-03-21T12:31:48.256 INFO:teuthology.orchestra.run.vm01.stderr:sudo: /sbin/restorecon: command not found
2026-03-21T12:31:48.256 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T12:31:48.256 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/osd/ceph-1
2026-03-21T12:31:48.305 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}
2026-03-21T12:31:48.305 INFO:tasks.ceph:role: osd.1
2026-03-21T12:31:48.305 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm01.local
2026-03-21T12:31:48.305 DEBUG:teuthology.orchestra.run.vm01:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout: = reflink=1 bigtime=0 inobtcount=0
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout: = sunit=0 swidth=0 blks
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout:log =internal log bsize=4096 blocks=2560, version=2
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T12:31:48.353 INFO:teuthology.orchestra.run.vm01.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T12:31:48.358 INFO:teuthology.orchestra.run.vm01.stdout:Discarding blocks...Done.
2026-03-21T12:31:48.359 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm01.local -o noatime
2026-03-21T12:31:48.359 DEBUG:teuthology.orchestra.run.vm01:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1
2026-03-21T12:31:48.412 DEBUG:teuthology.orchestra.run.vm01:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1
2026-03-21T12:31:48.460 INFO:teuthology.orchestra.run.vm01.stderr:sudo: /sbin/restorecon: command not found
2026-03-21T12:31:48.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T12:31:48.460 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/osd/ceph-2
2026-03-21T12:31:48.511 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}
2026-03-21T12:31:48.511 INFO:tasks.ceph:role: osd.2
2026-03-21T12:31:48.511 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm01.local
2026-03-21T12:31:48.511 DEBUG:teuthology.orchestra.run.vm01:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 attr=2, projid32bit=1
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout: = reflink=1 bigtime=0 inobtcount=0
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout:data = bsize=4096 blocks=5241856, imaxpct=25
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout: = sunit=0 swidth=0 blks
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout:log =internal log bsize=4096 blocks=2560, version=2
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout: = sectsz=512 sunit=0 blks, lazy-count=1
2026-03-21T12:31:48.561 INFO:teuthology.orchestra.run.vm01.stdout:realtime =none extsz=4096 blocks=0, rtextents=0
2026-03-21T12:31:48.565 INFO:teuthology.orchestra.run.vm01.stdout:Discarding blocks...Done.
2026-03-21T12:31:48.566 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm01.local -o noatime
2026-03-21T12:31:48.566 DEBUG:teuthology.orchestra.run.vm01:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-2
2026-03-21T12:31:48.616 DEBUG:teuthology.orchestra.run.vm01:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2
2026-03-21T12:31:48.664 INFO:teuthology.orchestra.run.vm01.stderr:sudo: /sbin/restorecon: command not found
2026-03-21T12:31:48.664 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-21T12:31:48.664 DEBUG:teuthology.orchestra.run.vm01:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:48.727 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:48.725+0000 7ff1b75dea40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2026-03-21T12:31:48.727 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:48.725+0000 7ff1b75dea40 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring
2026-03-21T12:31:48.727 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:48.725+0000 7ff1b75dea40 -1 bdev(0x56424dbc1800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted
2026-03-21T12:31:48.727 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:48.725+0000 7ff1b75dea40 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid
2026-03-21T12:31:49.558 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
2026-03-21T12:31:49.606 DEBUG:teuthology.orchestra.run.vm01:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:49.672 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:49.669+0000 7f71c2d6ca40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory
2026-03-21T12:31:49.672 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:49.669+0000 7f71c2d6ca40 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring
2026-03-21T12:31:49.672 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:49.669+0000 7f71c2d6ca40 -1 bdev(0x55d21e01b800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted
2026-03-21T12:31:49.672 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:49.669+0000 7f71c2d6ca40 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid
2026-03-21T12:31:50.598 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
2026-03-21T12:31:50.647 DEBUG:teuthology.orchestra.run.vm01:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:50.712 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:50.709+0000 7f483b1caa40 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory
2026-03-21T12:31:50.712 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:50.709+0000 7f483b1caa40 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring
2026-03-21T12:31:50.713 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:50.709+0000 7f483b1caa40 -1 bdev(0x55702bdf5800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted
2026-03-21T12:31:50.713 INFO:teuthology.orchestra.run.vm01.stderr:2026-03-21T12:31:50.709+0000 7f483b1caa40 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid
2026-03-21T12:31:51.582 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
2026-03-21T12:31:51.631 INFO:tasks.ceph:Reading keys from all nodes...
2026-03-21T12:31:51.631 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:51.631 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/mgr/ceph-x/keyring of=/dev/stdout
2026-03-21T12:31:51.682 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:51.682 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout
2026-03-21T12:31:51.731 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:51.731 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout
2026-03-21T12:31:51.782 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:51.782 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout
2026-03-21T12:31:51.831 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:51.831 DEBUG:teuthology.orchestra.run.vm01:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout
2026-03-21T12:31:51.877 INFO:tasks.ceph:Adding keys to all mons...
2026-03-21T12:31:51.877 DEBUG:teuthology.orchestra.run.vm01:> sudo tee -a /etc/ceph/ceph.keyring
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout:[mgr.x]
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout: key = AQCzj75pjz27OBAAnlJabM7WzH0pxL7YBAMabw==
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout:[osd.0]
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout: key = AQC0j75p+zlfKxAAmyr6ywSheaaVqWo0RqKPlg==
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout:[osd.1]
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout: key = AQC1j75pW4UUKBAAgH/eSNz+9p6bA4FnpQfXwg==
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout:[osd.2]
2026-03-21T12:31:51.924 INFO:teuthology.orchestra.run.vm01.stdout: key = AQC2j75pbft/KhAAjMD4TRAJqoY6TcCMtREpiA==
2026-03-21T12:31:51.925 INFO:teuthology.orchestra.run.vm01.stdout:[client.0]
2026-03-21T12:31:51.925 INFO:teuthology.orchestra.run.vm01.stdout: key = AQC0j75p+yTnABAAJvQ2jFGzLB/bM7APGzCYqw==
2026-03-21T12:31:51.925 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.x --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *'
2026-03-21T12:31:51.988 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T12:31:52.053 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T12:31:52.116 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *'
2026-03-21T12:31:52.178 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow
2026-03-21T12:31:52.240 INFO:tasks.ceph:Running mkfs on mon nodes...
2026-03-21T12:31:52.241 DEBUG:teuthology.orchestra.run.vm01:> sudo mkdir -p /var/lib/ceph/mon/ceph-a
2026-03-21T12:31:52.289 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
2026-03-21T12:31:52.366 DEBUG:teuthology.orchestra.run.vm01:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a
2026-03-21T12:31:52.415 DEBUG:teuthology.orchestra.run.vm01:> rm -- /home/ubuntu/cephtest/ceph.monmap
2026-03-21T12:31:52.461 INFO:tasks.ceph:Starting mon daemons in cluster ceph...
2026-03-21T12:31:52.461 INFO:tasks.ceph.mon.a:Restarting daemon
2026-03-21T12:31:52.461 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a
2026-03-21T12:31:52.507 INFO:tasks.ceph.mon.a:Started
2026-03-21T12:31:52.507 INFO:tasks.ceph:Starting mgr daemons in cluster ceph...
2026-03-21T12:31:52.507 INFO:tasks.ceph.mgr.x:Restarting daemon
2026-03-21T12:31:52.507 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i x
2026-03-21T12:31:52.508 INFO:tasks.ceph.mgr.x:Started
2026-03-21T12:31:52.508 DEBUG:tasks.ceph:set 0 configs
2026-03-21T12:31:52.508 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph config dump
2026-03-21T12:31:52.619 INFO:teuthology.orchestra.run.vm01.stdout:WHO MASK LEVEL OPTION VALUE RO
2026-03-21T12:31:52.632 INFO:tasks.ceph:Setting crush tunables to default
2026-03-21T12:31:52.632 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd crush tunables default
2026-03-21T12:31:52.740 INFO:teuthology.orchestra.run.vm01.stderr:adjusted tunables profile to default
2026-03-21T12:31:52.756 INFO:tasks.ceph:check_enable_crimson: False
2026-03-21T12:31:52.756 INFO:tasks.ceph:Starting osd daemons in cluster ceph...
2026-03-21T12:31:52.756 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:52.756 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout
2026-03-21T12:31:52.763 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:52.763 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout
2026-03-21T12:31:52.819 DEBUG:teuthology.orchestra.run.vm01:> set -ex
2026-03-21T12:31:52.819 DEBUG:teuthology.orchestra.run.vm01:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout
2026-03-21T12:31:52.871 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd new bee261fd-0812-4d61-8933-bbb968691bc0 0
2026-03-21T12:31:53.024 INFO:teuthology.orchestra.run.vm01.stdout:0
2026-03-21T12:31:53.037 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd new f46449e1-d22f-4b12-a276-a42ee51c97de 1
2026-03-21T12:31:53.149 INFO:teuthology.orchestra.run.vm01.stdout:1
2026-03-21T12:31:53.162 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd new 2046d027-e1a7-4f9b-a4f4-771aa1f54b2e 2
2026-03-21T12:31:53.275 INFO:teuthology.orchestra.run.vm01.stdout:2
2026-03-21T12:31:53.288 INFO:tasks.ceph.osd.0:Restarting daemon
2026-03-21T12:31:53.288 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0
2026-03-21T12:31:53.288 INFO:tasks.ceph.osd.0:Started
2026-03-21T12:31:53.288 INFO:tasks.ceph.osd.1:Restarting daemon
2026-03-21T12:31:53.288 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1
2026-03-21T12:31:53.289 INFO:tasks.ceph.osd.1:Started
2026-03-21T12:31:53.289 INFO:tasks.ceph.osd.2:Restarting daemon
2026-03-21T12:31:53.289 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2
2026-03-21T12:31:53.290 INFO:tasks.ceph.osd.2:Started
2026-03-21T12:31:53.290 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json
2026-03-21T12:31:53.411 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:31:53.411
INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"fsid":"2056cbb5-2007-4290-89d5-61be1cdf6e81","created":"2026-03-21T12:31:52.568842+0000","modified":"2026-03-21T12:31:53.273830+0000","last_up_change":"0.000000","last_in_change":"2026-03-21T12:31:53.273830+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"bee261fd-0812-4d61-8933-bbb968691bc0","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"f46449e1-d22f-4b12-a276-a42ee51c97de","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":2,"uuid":"2046d027-e1a7-4f9b-a4f4-771aa1f54b2e","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T12:31:53.424 INFO:tasks.ceph.ceph_manager.ceph:[] 2026-03-21T12:31:53.424 INFO:tasks.ceph:Waiting for OSDs to come up 2026-03-21T12:31:53.592 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T12:31:53.589+0000 7fb180048a40 -1 Falling back to public interface 2026-03-21T12:31:53.691 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:31:53.689+0000 7f5cebfb6a40 -1 Falling back to public interface 
2026-03-21T12:31:53.719 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:31:53.717+0000 7f4e6a71ea40 -1 Falling back to public interface 2026-03-21T12:31:53.726 DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-21T12:31:53.827 INFO:teuthology.misc.health.vm01.stdout: 2026-03-21T12:31:53.827 INFO:teuthology.misc.health.vm01.stdout:{"epoch":5,"fsid":"2056cbb5-2007-4290-89d5-61be1cdf6e81","created":"2026-03-21T12:31:52.568842+0000","modified":"2026-03-21T12:31:53.273830+0000","last_up_change":"0.000000","last_in_change":"2026-03-21T12:31:53.273830+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"bee261fd-0812-4d61-8933-bbb968691bc0","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":1,"uuid":"f46449e1-d22f-4b12-a276-a42ee51c97de","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"2046d027-e1a7-4f9b-a4f4-771aa1f54b2e","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-21T12:31:53.839 DEBUG:teuthology.misc:0 of 3 OSDs are up
2026-03-21T12:31:53.864 INFO:tasks.ceph.mgr.x.vm01.stderr:/usr/lib/python3/dist-packages/scipy/__init__.py:67: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
2026-03-21T12:31:53.864 INFO:tasks.ceph.mgr.x.vm01.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
2026-03-21T12:31:53.864 INFO:tasks.ceph.mgr.x.vm01.stderr: from numpy import show_config as show_numpy_config
2026-03-21T12:31:54.221 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T12:31:54.217+0000 7fb180048a40 -1 osd.2 0 log_to_monitors true
2026-03-21T12:31:54.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:31:54.293+0000 7f5cebfb6a40 -1 osd.1 0 log_to_monitors true
2026-03-21T12:31:54.392 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:31:54.389+0000 7f4e6a71ea40 -1 osd.0 0 log_to_monitors true
2026-03-21T12:31:54.517 INFO:tasks.ceph.mgr.x.vm01.stderr:Failed to import NVMeoFClient and related components: cannot import name 'NVMeoFClient' from 'dashboard.services.nvmeof_client' (/usr/share/ceph/mgr/dashboard/services/nvmeof_client.py)
2026-03-21T12:31:55.603 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:31:55.601+0000 7f4e666c7640 -1 osd.0 0 waiting for initial osdmap
2026-03-21T12:31:55.603 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:31:55.601+0000 7f5ce8771640 -1 osd.1 0 waiting for initial osdmap
2026-03-21T12:31:55.606 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:31:55.601+0000 7f4e614d5640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-21T12:31:55.607 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:31:55.605+0000 7f5ce2d6d640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-21T12:31:55.611 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T12:31:55.609+0000 7fb17c803640 -1 osd.2 0 waiting for initial osdmap
2026-03-21T12:31:55.613 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T12:31:55.609+0000 7fb176dff640 -1 osd.2 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
2026-03-21T12:31:56.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:31:56.009+0000 7efe38f1d640 -1 mgr.server handle_report got status from non-daemon mon.a
2026-03-21T12:32:00.140
DEBUG:teuthology.orchestra.run.vm01:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-21T12:32:00.301 INFO:teuthology.misc.health.vm01.stdout: 2026-03-21T12:32:00.301 INFO:teuthology.misc.health.vm01.stdout:{"epoch":11,"fsid":"2056cbb5-2007-4290-89d5-61be1cdf6e81","created":"2026-03-21T12:31:52.568842+0000","modified":"2026-03-21T12:32:00.018009+0000","last_up_change":"2026-03-21T12:31:56.587987+0000","last_in_change":"2026-03-21T12:31:53.273830+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T12:31:57.019034+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max
_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"bee261fd-0812-4d61-8933-bbb968691bc0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6817","nonce":3921196566}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6819","nonce":3921196566}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6823","nonce":3921196566}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6821","nonce":3921196566}]},"public_addr":"192.168.123.101:6817/3921196566","cluster_addr":"192.168.123.101:6819/3921196566","heartbeat_back_addr":"192.168.123.101:6823/3921196566","heartbeat_front_addr":"192.168.123.101:6821/3921196566","state":["exists","up"]},{"osd":1,"uuid":"f46449e1-d22f-4
b12-a276-a42ee51c97de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":9,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6809","nonce":1915343733}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6811","nonce":1915343733}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6815","nonce":1915343733}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6813","nonce":1915343733}]},"public_addr":"192.168.123.101:6809/1915343733","cluster_addr":"192.168.123.101:6811/1915343733","heartbeat_back_addr":"192.168.123.101:6815/1915343733","heartbeat_front_addr":"192.168.123.101:6813/1915343733","state":["exists","up"]},{"osd":2,"uuid":"2046d027-e1a7-4f9b-a4f4-771aa1f54b2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6801","nonce":1254793177}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6803","nonce":1254793177}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6807","nonce":1254793177}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6805","nonce":1254793177}]},"public_addr":"192.168.123.101:6801/1254793177","cluster_addr":"192.168.123.101:6803/1254793177","heartbeat_back_addr":"192.168.123.101:6807/12
54793177","heartbeat_front_addr":"192.168.123.101:6805/1254793177","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}}
2026-03-21T12:32:00.314 DEBUG:teuthology.misc:3 of 3 OSDs are up
2026-03-21T12:32:00.314 INFO:tasks.ceph:Creating RBD pool
2026-03-21T12:32:00.314 DEBUG:teuthology.orchestra.run.vm01:> sudo ceph --cluster ceph osd pool create rbd 8
2026-03-21T12:32:01.032 INFO:teuthology.orchestra.run.vm01.stderr:pool 'rbd' created
2026-03-21T12:32:01.051 DEBUG:teuthology.orchestra.run.vm01:> rbd --cluster ceph pool init rbd
2026-03-21T12:32:04.055 INFO:tasks.ceph:Starting mds daemons in cluster ceph...
2026-03-21T12:32:04.055 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json
2026-03-21T12:32:04.055 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting
2026-03-21T12:32:04.227 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:32:04.241 INFO:teuthology.orchestra.run.vm01.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}]
2026-03-21T12:32:04.241 INFO:tasks.ceph_manager:config epoch is 1
2026-03-21T12:32:04.241 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean...
2026-03-21T12:32:04.241 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available
2026-03-21T12:32:04.241 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json
2026-03-21T12:32:04.433 INFO:teuthology.orchestra.run.vm01.stdout:
2026-03-21T12:32:04.448 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":5,"flags":0,"active_gid":4105,"active_name":"x","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6824","nonce":4068176778},{"type":"v1","addr":"192.168.123.101:6825","nonce":4068176778}]},"active_addr":"192.168.123.101:6825/4068176778","active_change":"2026-03-21T12:31:55.002008+0000","active_mgr_features":4544132024016699391,"available":true,"standbys":[],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP 
port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent 
ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for 
certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in 
seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every 
host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the 
hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus 
deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray 
daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by 
cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"
FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","
max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_PO
LICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advan
ced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD
_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}
,{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health 
metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level
","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level"
:"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds 
upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator 
backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":""
,"enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The 
factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to 
sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","
enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","
max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_al
lowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","mi
n":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"
","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default
_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":
"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, 
version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_t
o_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced
","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge 
threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":10181
107}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":2013676310}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1743657854}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":139232892}]}]} 2026-03-21T12:32:04.448 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 2026-03-21T12:32:04.448 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-21T12:32:04.448 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-21T12:32:04.619 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:04.619 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":15,"fsid":"2056cbb5-2007-4290-89d5-61be1cdf6e81","created":"2026-03-21T12:31:52.568842+0000","modified":"2026-03-21T12:32:04.041975+0000","last_up_change":"2026-03-21T12:31:56.587987+0000","last_in_change":"2026-03-21T12:31:53.273830+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T12:31:57.019034+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":
"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-21T12:32:00.483849+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"15","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":15,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.8799999952316284,"score_stable":1.8799999952316284,"optimal_score":1,"raw_score_acting":1.8799999952316284,"raw_score_stable":1.8799999952316284,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"bee261fd-0812-4d61-8933-bbb968691bc0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6817","nonce":3921196566}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6819","nonce":3921196566}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6823","nonce":3921196566}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6821","nonce":3921196566}]},"public_addr":"192.168.123.101:6817/3921196566","cluster_addr":"192.168.123.101:6819/3921196566","heartbeat_back_addr":"192.168.123.101:6823/3921196566","heartbeat_front_addr":"192.168.123.101:6821/3921196566","state":["exists","up"]},{"osd":1,"uuid":"f46449e1-d22f-4b12-a276-a42ee51c97de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6809","nonce":1915343733}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6811","nonce":1915343733}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6815","nonce":191
5343733}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6813","nonce":1915343733}]},"public_addr":"192.168.123.101:6809/1915343733","cluster_addr":"192.168.123.101:6811/1915343733","heartbeat_back_addr":"192.168.123.101:6815/1915343733","heartbeat_front_addr":"192.168.123.101:6813/1915343733","state":["exists","up"]},{"osd":2,"uuid":"2046d027-e1a7-4f9b-a4f4-771aa1f54b2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6801","nonce":1254793177}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6803","nonce":1254793177}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6807","nonce":1254793177}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6805","nonce":1254793177}]},"public_addr":"192.168.123.101:6801/1254793177","cluster_addr":"192.168.123.101:6803/1254793177","heartbeat_back_addr":"192.168.123.101:6807/1254793177","heartbeat_front_addr":"192.168.123.101:6805/1254793177","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoc
h":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T12:32:04.634 INFO:tasks.ceph.ceph_manager.ceph:all up! 2026-03-21T12:32:04.634 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-21T12:32:04.801 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:04.801 INFO:teuthology.orchestra.run.vm01.stdout:{"epoch":15,"fsid":"2056cbb5-2007-4290-89d5-61be1cdf6e81","created":"2026-03-21T12:31:52.568842+0000","modified":"2026-03-21T12:32:04.041975+0000","last_up_change":"2026-03-21T12:31:56.587987+0000","last_in_change":"2026-03-21T12:31:53.273830+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-21T12:31:57.019034+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_b
ucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-21T12:32:00.483849+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"15","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":15,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.8799999952316284,"score_stable":1.8799999952316284,"optimal_score":1,"raw_score_acting":1.8799999952316284,"raw_score_stable":1.8799999952316284,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"bee261fd-0812-4d61-8933-bbb968691bc0","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6816","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6817","nonce":3921196566}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6818","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6819","nonce":3921196566}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6822","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6823","nonce":3921196566}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6820","nonce":3921196566},{"type":"v1","addr":"192.168.123.101:6821","nonce":3921196566}]},"public_addr":"192.168.123.101:6817/3921196566","cluster_addr":"192.168.123.101:6819/3921196566","heartbeat_back_addr":"192.168.123.101:6823/3921196566","heartbeat_front_addr":"192.168.123.101:6821/3921196566","state":["exists","up"]},{"osd":1,"uuid":"f46449e1-d22f-4b12-a276-a42ee51c97de","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6808","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6809","nonce":1915343733}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6810","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6811","nonce":1915343733}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6814","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6815","nonce":191
5343733}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6812","nonce":1915343733},{"type":"v1","addr":"192.168.123.101:6813","nonce":1915343733}]},"public_addr":"192.168.123.101:6809/1915343733","cluster_addr":"192.168.123.101:6811/1915343733","heartbeat_back_addr":"192.168.123.101:6815/1915343733","heartbeat_front_addr":"192.168.123.101:6813/1915343733","state":["exists","up"]},{"osd":2,"uuid":"2046d027-e1a7-4f9b-a4f4-771aa1f54b2e","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6800","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6801","nonce":1254793177}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6802","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6803","nonce":1254793177}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6806","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6807","nonce":1254793177}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.101:6804","nonce":1254793177},{"type":"v1","addr":"192.168.123.101:6805","nonce":1254793177}]},"public_addr":"192.168.123.101:6801/1254793177","cluster_addr":"192.168.123.101:6803/1254793177","heartbeat_back_addr":"192.168.123.101:6807/1254793177","heartbeat_front_addr":"192.168.123.101:6805/1254793177","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoc
h":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-21T12:32:04.815 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-21T12:32:04.816 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-21T12:32:04.816 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-21T12:32:04.913 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:04.913 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-21T12:32:04.919 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:04.920 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-21T12:32:04.920 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:04.920 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 
2026-03-21T12:32:05.096 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:05.102 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:05.111 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.2 2026-03-21T12:32:05.112 DEBUG:teuthology.parallel:result is None 2026-03-21T12:32:05.117 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.0 2026-03-21T12:32:05.117 DEBUG:teuthology.parallel:result is None 2026-03-21T12:32:05.142 INFO:teuthology.orchestra.run.vm01.stdout:34359738371 2026-03-21T12:32:05.156 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.1 2026-03-21T12:32:05.156 DEBUG:teuthology.parallel:result is None 2026-03-21T12:32:05.156 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-21T12:32:05.156 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-21T12:32:05.358 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:05.359 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-21T12:32:05.372 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":16,"stamp":"2026-03-21T12:32:05.008560+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":13,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":283115520,"kb_used":81376,"kb_used_data":848,"kb_used_omap":24,"kb_used_meta":80423,"kb_avail":283034144,"statfs":{"total":289910292480,"available":289826963456,"internally_reserved":0,"allocated":868352,"data_stored":1025927,"data_compressed":5428,"data_compressed_allocated":442368,"data_compressed_original":884736,"omap_allocated":25013,"internal_metadata":82353739},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_sta
t":{"commit_latency_ms":4,"apply_latency_ms":4,"commit_latency_ns":4000000,"apply_latency_ns":4000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.972541"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047269+0000","last_change":"2026-03-21T12:32:04.047485+0000","last_active":"2026-03-21T12:32:04.047269+0000","last_peered":"2026-03-21T12:32:04.047269+0000","last_clean":"2026-03-21T12:32:04.047269+0000","last_became_active":"2026-03-21T12:32:02.042022+0000","last_became_peered":"2026-03-21T12:32:02.042022+0000","last_unstale":"2026-03-21T12:32:04.047269+0000","last_undegraded":"2026-03-21T12:32:04.047269+0000","last_fullsized":"2026-03-21T12:32:04.047269+0000","mapping_epoch":12,"log_start":"0'0","on
disk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T18:17:59.087233+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00034794099999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047280+0000","last_change":"2026-03-21
T12:32:04.047502+0000","last_active":"2026-03-21T12:32:04.047280+0000","last_peered":"2026-03-21T12:32:04.047280+0000","last_clean":"2026-03-21T12:32:04.047280+0000","last_became_active":"2026-03-21T12:32:02.042402+0000","last_became_peered":"2026-03-21T12:32:02.042402+0000","last_unstale":"2026-03-21T12:32:04.047280+0000","last_undegraded":"2026-03-21T12:32:04.047280+0000","last_fullsized":"2026-03-21T12:32:04.047280+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T17:15:22.015918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000260948,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047497+0000","last_change":"2026-03-21T12:32:04.047552+0000","last_active":"2026-03-21T12:32:04.047497+0000","last_peered":"2026-03-21T12:32:04.047497+0000","last_clean":"2026-03-21T12:32:04.047497+0000","last_became_active":"2026-03-21T12:32:02.040497+0000","last_became_peered":"2026-03-21T12:32:02.040497+0000","last_unstale":"2026-03-21T12:32:04.047497+0000","last_undegraded":"2026-03-21T12:32:04.047497+0000","last_fullsized":"2026-03-21T12:32:04.047497+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T17:57:37.640569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032821400000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047519+0000","last_change":"2026-03-21T12:32:04.047588+0000","last_active":"2026-03-21T12:32:04.047519+0000","last_peered":"2026-03-21T12:32:04.047519+0000","last_clean":"2026-03-21T12:32:04.047519+0000","last_became_active":"2026-03-21T12:32:02
.041010+0000","last_became_peered":"2026-03-21T12:32:02.041010+0000","last_unstale":"2026-03-21T12:32:04.047519+0000","last_undegraded":"2026-03-21T12:32:04.047519+0000","last_fullsized":"2026-03-21T12:32:04.047519+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T17:02:22.625758+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00024645,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_
by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"15'2","reported_seq":22,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.050208+0000","last_change":"2026-03-21T12:32:04.050208+0000","last_active":"2026-03-21T12:32:04.050208+0000","last_peered":"2026-03-21T12:32:04.050208+0000","last_clean":"2026-03-21T12:32:04.050208+0000","last_became_active":"2026-03-21T12:32:02.040500+0000","last_became_peered":"2026-03-21T12:32:02.040500+0000","last_unstale":"2026-03-21T12:32:04.050208+0000","last_undegraded":"2026-03-21T12:32:04.050208+0000","last_fullsized":"2026-03-21T12:32:04.050208+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T22:55:50.689370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00047548000000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1],"acting":[0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":18,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.437901+0000","last_change":"2026-03-21T12:32:04.437997+0000","last_active":"2026-03-21T12:32:04.437901+0000","last_peered":"2026-03-21T12:32:04.437901+0000","last_clean":"2026-03-21T12:32:04.437901+0000","last_became_active":"2026-03-21T12:32:02.040709+0000","last_became_peered":"2026-03-21T12:32:02.040709+0000","last_unstale":"2026-03-21T12:32:04.437901+0000","last_undegraded":"2026-03-21T12:32:04.437901+0000","last_fullsized":"2026-03-21T12:32:04.437901+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T13:00:07.506642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021819799999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":18,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.438016+0000","last_change":"2026-03-21T12:32:04.438072+0000","last_active":"2026-03-21T12:32:04.438016+0000","last_peered":"2026-03-21T12:32:04.438016+0000","last_clean":"2026-03-21T12:32:04.438016+0000","last_became_active":"2026-03
-21T12:32:02.042543+0000","last_became_peered":"2026-03-21T12:32:02.042543+0000","last_unstale":"2026-03-21T12:32:04.438016+0000","last_undegraded":"2026-03-21T12:32:04.438016+0000","last_fullsized":"2026-03-21T12:32:04.438016+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T13:22:22.927341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000103123,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts"
:[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"13'1","reported_seq":21,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047631+0000","last_change":"2026-03-21T12:32:04.047697+0000","last_active":"2026-03-21T12:32:04.047631+0000","last_peered":"2026-03-21T12:32:04.047631+0000","last_clean":"2026-03-21T12:32:04.047631+0000","last_became_active":"2026-03-21T12:32:02.040472+0000","last_became_peered":"2026-03-21T12:32:02.040472+0000","last_unstale":"2026-03-21T12:32:04.047631+0000","last_undegraded":"2026-03-21T12:32:04.047631+0000","last_fullsized":"2026-03-21T12:32:04.047631+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T12:47:33.909784+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00033188100000000002,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"10'32","reported_seq":65,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047758+0000","last_change":"2026-03-21T12:31:59.375656+0000","last_active":"2026-03-21T12:32:04.047758+0000","last_peered":"2026-03-21T12:32:04.047758+0000","last_clean":"2026-03-21T12:32:04.047758+0000","last_became_active":"2026-03-21T12:31:59.375520+0000","last_became_peered":"2026-03-21T12:31:59.375520+0000","last_unstale":"2026-03-21T12:32:04.047758+0000","last_undegraded":"2026-03-21T12:32:04.047758+0000","last_fullsized":"2026-03-21T12:32:04.047758+0000","mapping_epoch":9,"log_start":"0'0","ondisk_log_start":"0'0","created":9,"last_epoch_clean":10,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:31:58.014888+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-03-21T12:31:58.014888+0000","last_clean_scrub_stamp":"2026-03-21T12:31:58.014888+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T14:59:02.990682+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_s
crub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":3},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":483328,"data_stored":918560,"data_compressed":5428,"data_compressed_allocated":442368,"data_compressed_original":884736,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondi
sk_log_size":32,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":2,"up_from":8,"seq":34359738371,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":26960,"kb_used_data":112,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344880,"statfs":{"total":96636764160,"available":96609157120,"internally_reserved":0,"allocated":114688,"data_stored":31091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7471,"internal_metadata":27452113},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":2,"apply_latency_ms":2,"commit_latency_ns":2000000,"apply_latency_ns":2000000},"alerts":[]},{"osd":1,"up_from":8,"seq":34359738371,"num_pgs":9,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27208,"kb_used_data":368,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344632,"statfs":{"total":96636764160,"available":96608903168,"internally_reserved":0,"allocated":376832,"data_stored":497418,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27208,"kb_used_data":368,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344632,"statfs":{"total":96636764160,"available":96608903168,"internally_reserved":0,"allocated":376832,"data_stored":497418,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":8771
,"internal_metadata":27450813},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":241664,"data_stored":459280,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":241664,"data_stored":459280,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-21T12:32:05.373 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-21T12:32:05.538 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:05.538 INFO:teuthology.orchestra.run.vm01.stderr:dumped all 2026-03-21T12:32:05.552 
INFO:teuthology.orchestra.run.vm01.stdout:{"pg_ready":true,"pg_map":{"version":16,"stamp":"2026-03-21T12:32:05.008560+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459299,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":59,"num_write_kb":586,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":35,"ondisk_log_size":35,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":13,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":283115520,"kb_used":81376,"kb_used_data":848,"kb_used_omap":24,"kb_used_meta":80423,"kb_avail":283034144,"statfs":{"total":289910292480,"available":289826963456,"internally_reserved":0,"allocated":868352,"data_stored":1025927,"data_compressed":5428,"data_compressed_allocated":442368,"data_compressed_original":884736,"omap_allocated":25013,"internal_metadata":82353739},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_sta
t":{"commit_latency_ms":4,"apply_latency_ms":4,"commit_latency_ns":4000000,"apply_latency_ns":4000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"2.972541"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047269+0000","last_change":"2026-03-21T12:32:04.047485+0000","last_active":"2026-03-21T12:32:04.047269+0000","last_peered":"2026-03-21T12:32:04.047269+0000","last_clean":"2026-03-21T12:32:04.047269+0000","last_became_active":"2026-03-21T12:32:02.042022+0000","last_became_peered":"2026-03-21T12:32:02.042022+0000","last_unstale":"2026-03-21T12:32:04.047269+0000","last_undegraded":"2026-03-21T12:32:04.047269+0000","last_fullsized":"2026-03-21T12:32:04.047269+0000","mapping_epoch":12,"log_start":"0'0","on
disk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T18:17:59.087233+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00034794099999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047280+0000","last_change":"2026-03-21
T12:32:04.047502+0000","last_active":"2026-03-21T12:32:04.047280+0000","last_peered":"2026-03-21T12:32:04.047280+0000","last_clean":"2026-03-21T12:32:04.047280+0000","last_became_active":"2026-03-21T12:32:02.042402+0000","last_became_peered":"2026-03-21T12:32:02.042402+0000","last_unstale":"2026-03-21T12:32:04.047280+0000","last_undegraded":"2026-03-21T12:32:04.047280+0000","last_fullsized":"2026-03-21T12:32:04.047280+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T17:15:22.015918+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000260948,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047497+0000","last_change":"2026-03-21T12:32:04.047552+0000","last_active":"2026-03-21T12:32:04.047497+0000","last_peered":"2026-03-21T12:32:04.047497+0000","last_clean":"2026-03-21T12:32:04.047497+0000","last_became_active":"2026-03-21T12:32:02.040497+0000","last_became_peered":"2026-03-21T12:32:02.040497+0000","last_unstale":"2026-03-21T12:32:04.047497+0000","last_undegraded":"2026-03-21T12:32:04.047497+0000","last_fullsized":"2026-03-21T12:32:04.047497+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_
stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T17:57:37.640569+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00032821400000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047519+0000","last_change":"2026-03-21T12:32:04.047588+0000","last_active":"2026-03-21T12:32:04.047519+0000","last_peered":"2026-03-21T12:32:04.047519+0000","last_clean":"2026-03-21T12:32:04.047519+0000","last_became_active":"2026-03-21T12:32:02
.041010+0000","last_became_peered":"2026-03-21T12:32:02.041010+0000","last_unstale":"2026-03-21T12:32:04.047519+0000","last_undegraded":"2026-03-21T12:32:04.047519+0000","last_fullsized":"2026-03-21T12:32:04.047519+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T17:02:22.625758+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00024645,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_
by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"15'2","reported_seq":22,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.050208+0000","last_change":"2026-03-21T12:32:04.050208+0000","last_active":"2026-03-21T12:32:04.050208+0000","last_peered":"2026-03-21T12:32:04.050208+0000","last_clean":"2026-03-21T12:32:04.050208+0000","last_became_active":"2026-03-21T12:32:02.040500+0000","last_became_peered":"2026-03-21T12:32:02.040500+0000","last_unstale":"2026-03-21T12:32:04.050208+0000","last_undegraded":"2026-03-21T12:32:04.050208+0000","last_fullsized":"2026-03-21T12:32:04.050208+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T22:55:50.689370+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00047548000000000002,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1],"acting":[0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":18,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.437901+0000","last_change":"2026-03-21T12:32:04.437997+0000","last_active":"2026-03-21T12:32:04.437901+0000","last_peered":"2026-03-21T12:32:04.437901+0000","last_clean":"2026-03-21T12:32:04.437901+0000","last_became_active":"2026-03-21T12:32:02.040709+0000","last_became_peered":"2026-03-21T12:32:02.040709+0000","last_unstale":"2026-03-21T12:32:04.437901+0000","last_undegraded":"2026-03-21T12:32:04.437901+0000","last_fullsized":"2026-03-21T12:32:04.437901+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last
_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T13:00:07.506642+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00021819799999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":18,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.438016+0000","last_change":"2026-03-21T12:32:04.438072+0000","last_active":"2026-03-21T12:32:04.438016+0000","last_peered":"2026-03-21T12:32:04.438016+0000","last_clean":"2026-03-21T12:32:04.438016+0000","last_became_active":"2026-03
-21T12:32:02.042543+0000","last_became_peered":"2026-03-21T12:32:02.042543+0000","last_unstale":"2026-03-21T12:32:04.438016+0000","last_undegraded":"2026-03-21T12:32:04.438016+0000","last_fullsized":"2026-03-21T12:32:04.438016+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T13:22:22.927341+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000103123,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts"
:[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"13'1","reported_seq":21,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047631+0000","last_change":"2026-03-21T12:32:04.047697+0000","last_active":"2026-03-21T12:32:04.047631+0000","last_peered":"2026-03-21T12:32:04.047631+0000","last_clean":"2026-03-21T12:32:04.047631+0000","last_became_active":"2026-03-21T12:32:02.040472+0000","last_became_peered":"2026-03-21T12:32:02.040472+0000","last_unstale":"2026-03-21T12:32:04.047631+0000","last_undegraded":"2026-03-21T12:32:04.047631+0000","last_fullsized":"2026-03-21T12:32:04.047631+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-21T12:32:01.028161+0000","last_clean_scrub_stamp":"2026-03-21T12:32:01.028161+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-22T12:47:33.909784+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00033188100000000002,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"10'32","reported_seq":65,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-21T12:32:04.047758+0000","last_change":"2026-03-21T12:31:59.375656+0000","last_active":"2026-03-21T12:32:04.047758+0000","last_peered":"2026-03-21T12:32:04.047758+0000","last_clean":"2026-03-21T12:32:04.047758+0000","last_became_active":"2026-03-21T12:31:59.375520+0000","last_became_peered":"2026-03-21T12:31:59.375520+0000","last_unstale":"2026-03-21T12:32:04.047758+0000","last_undegraded":"2026-03-21T12:32:04.047758+0000","last_fullsized":"2026-03-21T12:32:04.047758+0000","mapping_epoch":9,"log_start":"0'0","ondisk_log_start":"0'0","created":9,"last_epoch_clean":10,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-21T12:31:58.014888+0000","last_deep_scrub":"0'0","last_
deep_scrub_stamp":"2026-03-21T12:31:58.014888+0000","last_clean_scrub_stamp":"2026-03-21T12:31:58.014888+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-22T14:59:02.990682+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_s
crub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":3},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":483328,"data_stored":918560,"data_compressed":5428,"data_compressed_allocated":442368,"data_compressed_original":884736,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondi
sk_log_size":32,"up":2,"acting":2,"num_store_stats":2}],"osd_stats":[{"osd":2,"up_from":8,"seq":34359738371,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":26960,"kb_used_data":112,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":94344880,"statfs":{"total":96636764160,"available":96609157120,"internally_reserved":0,"allocated":114688,"data_stored":31091,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":7471,"internal_metadata":27452113},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":2,"apply_latency_ms":2,"commit_latency_ns":2000000,"apply_latency_ns":2000000},"alerts":[]},{"osd":1,"up_from":8,"seq":34359738371,"num_pgs":9,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27208,"kb_used_data":368,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344632,"statfs":{"total":96636764160,"available":96608903168,"internally_reserved":0,"allocated":376832,"data_stored":497418,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":8771,"internal_metadata":27450813},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":94371840,"kb_used":27208,"kb_used_data":368,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":94344632,"statfs":{"total":96636764160,"available":96608903168,"internally_reserved":0,"allocated":376832,"data_stored":497418,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":8771
,"internal_metadata":27450813},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":241664,"data_stored":459280,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":241664,"data_stored":459280,"data_compressed":2714,"data_compressed_allocated":221184,"data_compressed_original":442368,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-21T12:32:05.553 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-21T12:32:05.553 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-21T12:32:05.553 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-21T12:32:05.553 DEBUG:teuthology.orchestra.run.vm01:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-21T12:32:05.732 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:05.732 INFO:teuthology.orchestra.run.vm01.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-21T12:32:05.746 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-21T12:32:05.746 INFO:teuthology.run_tasks:Running task exec... 2026-03-21T12:32:05.749 INFO:teuthology.task.exec:Executing custom commands... 2026-03-21T12:32:05.749 INFO:teuthology.task.exec:Running commands on role client.0 host ubuntu@vm01.local 2026-03-21T12:32:05.749 DEBUG:teuthology.orchestra.run.vm01:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'sudo ceph osd pool create datapool 4' 2026-03-21T12:32:06.059 INFO:teuthology.orchestra.run.vm01.stderr:pool 'datapool' created 2026-03-21T12:32:06.076 DEBUG:teuthology.orchestra.run.vm01:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'rbd pool init datapool' 2026-03-21T12:32:08.569 INFO:teuthology.run_tasks:Running task install... 
2026-03-21T12:32:08.571 DEBUG:teuthology.task.install:project ceph 2026-03-21T12:32:08.571 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}} 2026-03-21T12:32:08.571 DEBUG:teuthology.task.install:config {'extra_system_packages': {'deb': ['fio', 'python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['fio', 'bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'} 2026-03-21T12:32:08.571 INFO:teuthology.task.install:Using flavor: default 2026-03-21T12:32:08.574 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']} 2026-03-21T12:32:08.574 INFO:teuthology.task.install:extra packages: [] 2026-03-21T12:32:08.574 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-key list | grep Ceph 2026-03-21T12:32:08.614 INFO:teuthology.orchestra.run.vm01.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)). 
2026-03-21T12:32:08.635 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-21T12:32:08.635 INFO:teuthology.orchestra.run.vm01.stdout:uid [ unknown] Ceph.com (release key) 2026-03-21T12:32:08.635 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-21T12:32:08.635 INFO:teuthology.task.install.deb:Installing system (non-project) packages: fio, python3-jmespath, python3-xmltodict, s3cmd on remote deb x86_64 2026-03-21T12:32:08.635 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-21T12:32:09.214 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default/ 2026-03-21T12:32:09.214 INFO:teuthology.task.install.deb:Package version is 20.2.0-712-g70f8415b-1jammy 2026-03-21T12:32:09.717 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T12:32:09.718 DEBUG:teuthology.orchestra.run.vm01:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-21T12:32:09.727 DEBUG:teuthology.orchestra.run.vm01:> sudo apt-get update 2026-03-21T12:32:09.902 INFO:teuthology.orchestra.run.vm01.stdout:Hit:1 http://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-21T12:32:10.083 INFO:teuthology.orchestra.run.vm01.stdout:Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-21T12:32:10.185 INFO:teuthology.orchestra.run.vm01.stdout:Hit:3 http://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-21T12:32:10.280 INFO:teuthology.orchestra.run.vm01.stdout:Ign:4 
https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy InRelease 2026-03-21T12:32:10.288 INFO:teuthology.orchestra.run.vm01.stdout:Hit:5 http://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-21T12:32:10.392 INFO:teuthology.orchestra.run.vm01.stdout:Hit:6 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy Release 2026-03-21T12:32:10.504 INFO:teuthology.orchestra.run.vm01.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-21T12:32:11.277 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-21T12:32:11.289 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=20.2.0-712-g70f8415b-1jammy cephadm=20.2.0-712-g70f8415b-1jammy ceph-mds=20.2.0-712-g70f8415b-1jammy ceph-mgr=20.2.0-712-g70f8415b-1jammy ceph-common=20.2.0-712-g70f8415b-1jammy ceph-fuse=20.2.0-712-g70f8415b-1jammy ceph-test=20.2.0-712-g70f8415b-1jammy ceph-volume=20.2.0-712-g70f8415b-1jammy radosgw=20.2.0-712-g70f8415b-1jammy python3-rados=20.2.0-712-g70f8415b-1jammy python3-rgw=20.2.0-712-g70f8415b-1jammy python3-cephfs=20.2.0-712-g70f8415b-1jammy python3-rbd=20.2.0-712-g70f8415b-1jammy libcephfs2=20.2.0-712-g70f8415b-1jammy libcephfs-dev=20.2.0-712-g70f8415b-1jammy librados2=20.2.0-712-g70f8415b-1jammy librbd1=20.2.0-712-g70f8415b-1jammy rbd-fuse=20.2.0-712-g70f8415b-1jammy 2026-03-21T12:32:11.323 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-21T12:32:11.518 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-21T12:32:11.518 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-common is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-fuse is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-mds is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-mgr is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-test is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:ceph-volume is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:cephadm is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:libcephfs-dev is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:libcephfs2 is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:librados2 is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:librados2 set to manually installed. 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:librbd1 is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:librbd1 set to manually installed. 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:python3-cephfs is already the newest version (20.2.0-712-g70f8415b-1jammy). 
2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:python3-rados is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:python3-rbd is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:python3-rgw is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:radosgw is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:rbd-fuse is already the newest version (20.2.0-712-g70f8415b-1jammy). 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-21T12:32:11.647 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-21T12:32:11.674 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 0 newly installed, 0 to remove and 36 not upgraded. 2026-03-21T12:32:11.674 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-21T12:32:11.676 DEBUG:teuthology.orchestra.run.vm01:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install fio python3-jmespath python3-xmltodict s3cmd 2026-03-21T12:32:11.756 INFO:teuthology.orchestra.run.vm01.stdout:Reading package lists... 2026-03-21T12:32:11.938 INFO:teuthology.orchestra.run.vm01.stdout:Building dependency tree... 2026-03-21T12:32:11.939 INFO:teuthology.orchestra.run.vm01.stdout:Reading state information... 
2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout:python3-jmespath is already the newest version (0.10.0-1). 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout:python3-xmltodict is already the newest version (0.12.0-2). 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout:s3cmd is already the newest version (2.2.0-1). 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout:The following packages were automatically installed and are no longer required: 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-21T12:32:12.067 INFO:teuthology.orchestra.run.vm01.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-21T12:32:12.068 INFO:teuthology.orchestra.run.vm01.stdout:The following additional packages will be installed: 2026-03-21T12:32:12.068 INFO:teuthology.orchestra.run.vm01.stdout: libpmemblk1 2026-03-21T12:32:12.068 INFO:teuthology.orchestra.run.vm01.stdout:Suggested packages: 2026-03-21T12:32:12.068 INFO:teuthology.orchestra.run.vm01.stdout: gnuplot gfio python-scipy 2026-03-21T12:32:12.081 INFO:teuthology.orchestra.run.vm01.stdout:The following NEW packages will be installed: 2026-03-21T12:32:12.082 INFO:teuthology.orchestra.run.vm01.stdout: fio libpmemblk1 2026-03-21T12:32:12.104 INFO:teuthology.orchestra.run.vm01.stdout:0 upgraded, 2 newly installed, 0 to remove and 36 not upgraded. 2026-03-21T12:32:12.104 INFO:teuthology.orchestra.run.vm01.stdout:Need to get 4112 kB of archives. 2026-03-21T12:32:12.104 INFO:teuthology.orchestra.run.vm01.stdout:After this operation, 7813 kB of additional disk space will be used. 
2026-03-21T12:32:12.104 INFO:teuthology.orchestra.run.vm01.stdout:Get:1 http://archive.ubuntu.com/ubuntu jammy/universe amd64 libpmemblk1 amd64 1.11.1-3build1 [65.6 kB] 2026-03-21T12:32:12.129 INFO:teuthology.orchestra.run.vm01.stdout:Get:2 http://archive.ubuntu.com/ubuntu jammy/universe amd64 fio amd64 3.28-1 [4047 kB] 2026-03-21T12:32:12.391 INFO:teuthology.orchestra.run.vm01.stdout:Fetched 4112 kB in 0s (34.2 MB/s) 2026-03-21T12:32:12.410 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package libpmemblk1:amd64. 2026-03-21T12:32:12.441 INFO:teuthology.orchestra.run.vm01.stdout:(Reading database ... 126150 files and directories currently installed.) 2026-03-21T12:32:12.443 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../libpmemblk1_1.11.1-3build1_amd64.deb ... 2026-03-21T12:32:12.445 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking libpmemblk1:amd64 (1.11.1-3build1) ... 2026-03-21T12:32:12.461 INFO:teuthology.orchestra.run.vm01.stdout:Selecting previously unselected package fio. 2026-03-21T12:32:12.467 INFO:teuthology.orchestra.run.vm01.stdout:Preparing to unpack .../archives/fio_3.28-1_amd64.deb ... 2026-03-21T12:32:12.468 INFO:teuthology.orchestra.run.vm01.stdout:Unpacking fio (3.28-1) ... 2026-03-21T12:32:12.537 INFO:teuthology.orchestra.run.vm01.stdout:Setting up libpmemblk1:amd64 (1.11.1-3build1) ... 
2026-03-21T12:32:12.539 INFO:teuthology.orchestra.run.vm01.stdout:Setting up fio (3.28-1) ... 2026-03-21T12:32:12.781 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-21T12:32:12.827 INFO:teuthology.orchestra.run.vm01.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-21T12:32:13.181 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.181 INFO:teuthology.orchestra.run.vm01.stdout:Running kernel seems to be up-to-date. 2026-03-21T12:32:13.181 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.181 INFO:teuthology.orchestra.run.vm01.stdout:Services to be restarted: 2026-03-21T12:32:13.183 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart apache-htcacheclean.service 2026-03-21T12:32:13.190 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart rsyslog.service 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout:Service restarts being deferred: 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart networkd-dispatcher.service 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: systemctl restart unattended-upgrades.service 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout:No containers need to be restarted. 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout:No user sessions are running outdated binaries. 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout: 2026-03-21T12:32:13.193 INFO:teuthology.orchestra.run.vm01.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-21T12:32:14.145 INFO:teuthology.orchestra.run.vm01.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-21T12:32:14.149 DEBUG:teuthology.parallel:result is None 2026-03-21T12:32:14.149 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-21T12:32:14.714 DEBUG:teuthology.orchestra.run.vm01:> dpkg-query -W -f '${Version}' ceph 2026-03-21T12:32:14.723 INFO:teuthology.orchestra.run.vm01.stdout:20.2.0-712-g70f8415b-1jammy 2026-03-21T12:32:14.723 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712-g70f8415b-1jammy 2026-03-21T12:32:14.723 INFO:teuthology.task.install:The correct ceph version 20.2.0-712-g70f8415b-1jammy is installed. 2026-03-21T12:32:14.724 INFO:teuthology.task.install.util:Utilities already shipped, skip it... 2026-03-21T12:32:14.724 INFO:teuthology.run_tasks:Running task workunit... 2026-03-21T12:32:14.727 INFO:tasks.workunit:Pulling workunits from ref 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-21T12:32:14.728 INFO:tasks.workunit:Making a separate scratch dir for every client... 
2026-03-21T12:32:14.728 DEBUG:teuthology.orchestra.run.vm01:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-21T12:32:14.770 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-21T12:32:14.770 INFO:teuthology.orchestra.run.vm01.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-21T12:32:14.770 DEBUG:teuthology.orchestra.run.vm01:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-21T12:32:14.814 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-21T12:32:14.814 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-21T12:32:14.858 INFO:tasks.workunit:timeout=3h 2026-03-21T12:32:14.858 INFO:tasks.workunit:cleanup=True 2026-03-21T12:32:14.858 DEBUG:teuthology.orchestra.run.vm01:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 0392f78529848ec72469e8e431875cb98d3a5fb4 2026-03-21T12:32:14.903 INFO:tasks.workunit.client.0.vm01.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:Note: switching to '0392f78529848ec72469e8e431875cb98d3a5fb4'. 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:state without impacting any branches by switching back to a branch. 
2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:do so (now or later) by using -c with the switch command. Example: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: git switch -c 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:Or undo this operation with: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: git switch - 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr: 2026-03-21T12:32:52.825 INFO:tasks.workunit.client.0.vm01.stderr:HEAD is now at 0392f785298 qa/tasks/keystone: restart mariadb for rocky and alma linux too 2026-03-21T12:32:52.832 DEBUG:teuthology.orchestra.run.vm01:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-21T12:32:52.878 INFO:tasks.workunit.client.0.vm01.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-21T12:32:52.879 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-21T12:32:52.879 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-21T12:32:52.918 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 
2026-03-21T12:32:52.948 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-21T12:32:52.971 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-21T12:32:52.971 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-21T12:32:52.971 INFO:tasks.workunit.client.0.vm01.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-21T12:32:52.994 INFO:tasks.workunit.client.0.vm01.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-21T12:32:52.997 DEBUG:teuthology.orchestra.run.vm01:> set -ex 2026-03-21T12:32:52.997 DEBUG:teuthology.orchestra.run.vm01:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-21T12:32:53.041 INFO:tasks.workunit:Running workunits matching rbd/rbd_support_module_recovery.sh on client.0... 2026-03-21T12:32:53.042 INFO:tasks.workunit:Running workunit rbd/rbd_support_module_recovery.sh... 
2026-03-21T12:32:53.042 DEBUG:teuthology.orchestra.run.vm01:workunit test rbd/rbd_support_module_recovery.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=0392f78529848ec72469e8e431875cb98d3a5fb4 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rbd/rbd_support_module_recovery.sh 2026-03-21T12:32:53.087 INFO:tasks.workunit.client.0.vm01.stderr:+ POOL=rbd 2026-03-21T12:32:53.088 INFO:tasks.workunit.client.0.vm01.stderr:+ IMAGE_PREFIX=image 2026-03-21T12:32:53.088 INFO:tasks.workunit.client.0.vm01.stderr:+ NUM_IMAGES=20 2026-03-21T12:32:53.088 INFO:tasks.workunit.client.0.vm01.stderr:+ RUN_TIME=3600 2026-03-21T12:32:53.088 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror pool enable rbd image 2026-03-21T12:32:53.116 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror pool peer add rbd dummy 2026-03-21T12:32:53.137 INFO:tasks.workunit.client.0.vm01.stdout:37cd2b28-7807-49cd-bcf4-314a7c9989b1 2026-03-21T12:32:53.140 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i = 1 )) 2026-03-21T12:32:53.140 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:53.140 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image1 2026-03-21T12:32:53.167 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image1 snapshot 2026-03-21T12:32:53.383 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:53.388 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 1m 2026-03-21T12:32:53.421 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 
2026-03-21T12:32:53.421 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:53.421 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image2 2026-03-21T12:32:53.449 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image2 snapshot 2026-03-21T12:32:54.388 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:54.393 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image2 1m 2026-03-21T12:32:54.425 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:54.425 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:54.425 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image3 2026-03-21T12:32:54.452 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image3 snapshot 2026-03-21T12:32:55.419 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:55.424 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image3 1m 2026-03-21T12:32:55.454 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:55.455 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:55.455 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image4 2026-03-21T12:32:55.484 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image4 snapshot 2026-03-21T12:32:56.422 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:56.427 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image4 1m 2026-03-21T12:32:56.458 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:56.459 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:56.459 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature 
exclusive-lock rbd/image5 2026-03-21T12:32:56.487 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image5 snapshot 2026-03-21T12:32:57.427 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:57.432 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image5 1m 2026-03-21T12:32:57.462 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:57.462 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:57.462 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image6 2026-03-21T12:32:57.491 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image6 snapshot 2026-03-21T12:32:58.430 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:58.435 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image6 1m 2026-03-21T12:32:58.465 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:58.465 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:58.465 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image7 2026-03-21T12:32:58.493 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image7 snapshot 2026-03-21T12:32:59.432 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled 2026-03-21T12:32:59.437 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image7 1m 2026-03-21T12:32:59.468 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ )) 2026-03-21T12:32:59.468 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 )) 2026-03-21T12:32:59.468 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image8 2026-03-21T12:32:59.496 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image8 snapshot 2026-03-21T12:33:00.443 
INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:00.450 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image8 1m
2026-03-21T12:33:00.480 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:00.480 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:00.480 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image9
2026-03-21T12:33:00.508 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image9 snapshot
2026-03-21T12:33:01.451 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:01.457 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image9 1m
2026-03-21T12:33:01.488 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:01.488 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:01.488 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image10
2026-03-21T12:33:01.517 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image10 snapshot
2026-03-21T12:33:02.448 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:02.454 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image10 1m
2026-03-21T12:33:02.487 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:02.487 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:02.487 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image11
2026-03-21T12:33:02.516 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image11 snapshot
2026-03-21T12:33:03.452 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:03.458 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image11 1m
2026-03-21T12:33:03.489 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:03.489 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:03.489 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image12
2026-03-21T12:33:03.518 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image12 snapshot
2026-03-21T12:33:04.456 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:04.461 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image12 1m
2026-03-21T12:33:04.493 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:04.493 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:04.493 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image13
2026-03-21T12:33:04.522 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image13 snapshot
2026-03-21T12:33:05.460 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:05.465 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image13 1m
2026-03-21T12:33:05.496 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:05.496 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:05.496 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image14
2026-03-21T12:33:05.525 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image14 snapshot
2026-03-21T12:33:06.467 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:06.472 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image14 1m
2026-03-21T12:33:06.503 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:06.503 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:06.503 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image15
2026-03-21T12:33:06.532 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image15 snapshot
2026-03-21T12:33:07.469 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:07.474 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image15 1m
2026-03-21T12:33:07.507 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:07.507 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:07.507 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image16
2026-03-21T12:33:07.536 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image16 snapshot
2026-03-21T12:33:08.471 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:08.476 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image16 1m
2026-03-21T12:33:08.506 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:08.507 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:08.507 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image17
2026-03-21T12:33:08.739 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image17 snapshot
2026-03-21T12:33:09.476 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:09.481 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image17 1m
2026-03-21T12:33:09.513 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:09.513 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:09.513 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image18
2026-03-21T12:33:09.744 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image18 snapshot
2026-03-21T12:33:10.480 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:10.486 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image18 1m
2026-03-21T12:33:10.518 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:10.518 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:10.518 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image19
2026-03-21T12:33:10.548 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image19 snapshot
2026-03-21T12:33:11.484 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:11.489 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image19 1m
2026-03-21T12:33:11.522 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:11.522 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:11.522 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd create -s 1G --image-feature exclusive-lock rbd/image20
2026-03-21T12:33:11.552 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror image enable rbd/image20 snapshot
2026-03-21T12:33:12.489 INFO:tasks.workunit.client.0.vm01.stdout:Mirroring enabled
2026-03-21T12:33:12.494 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image20 1m
2026-03-21T12:33:12.527 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.527 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.527 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i = 1 ))
2026-03-21T12:33:12.527 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.527 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image1
2026-03-21T12:33:12.618 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd0
2026-03-21T12:33:12.619 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.619 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.619 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd0 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:12.619 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image2
2026-03-21T12:33:12.677 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd1
2026-03-21T12:33:12.677 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.677 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.677 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd1 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:12.677 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image3
2026-03-21T12:33:12.738 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd2
2026-03-21T12:33:12.738 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.738 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.738 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd2 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:12.738 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image4
2026-03-21T12:33:12.804 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd3
2026-03-21T12:33:12.804 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.804 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.804 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd3 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:12.804 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image5
2026-03-21T12:33:12.909 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd4
2026-03-21T12:33:12.909 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:12.909 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:12.909 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd4 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:12.909 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image6
2026-03-21T12:33:13.018 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd5
2026-03-21T12:33:13.019 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:13.019 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:13.019 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd5 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:13.019 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image7
2026-03-21T12:33:13.182 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd6
2026-03-21T12:33:13.182 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:13.182 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:13.182 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd6 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:13.182 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image8
2026-03-21T12:33:13.378 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd7
2026-03-21T12:33:13.379 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:13.379 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:13.379 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd7 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:13.379 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image9
2026-03-21T12:33:13.651 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd8
2026-03-21T12:33:13.651 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:13.651 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:13.651 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd8 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:13.651 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image10
2026-03-21T12:33:13.852 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd9
2026-03-21T12:33:13.853 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:13.853 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:13.853 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image11
2026-03-21T12:33:13.853 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd9 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:14.037 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd10
2026-03-21T12:33:14.038 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:14.038 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:14.038 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd10 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:14.040 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image12
2026-03-21T12:33:14.371 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd11
2026-03-21T12:33:14.371 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:14.371 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:14.371 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd11 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:14.373 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image13
2026-03-21T12:33:14.570 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd12
2026-03-21T12:33:14.570 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:14.570 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:14.570 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd12 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:14.570 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image14
2026-03-21T12:33:14.915 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd13
2026-03-21T12:33:14.916 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:14.916 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:14.917 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd13 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:14.924 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image15
2026-03-21T12:33:15.293 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd14
2026-03-21T12:33:15.293 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:15.293 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:15.293 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd14 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:15.293 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image16
2026-03-21T12:33:15.520 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd15
2026-03-21T12:33:15.521 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:15.521 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:15.521 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd15 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:15.521 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image17
2026-03-21T12:33:15.825 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd16
2026-03-21T12:33:15.825 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:15.825 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:15.825 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image18
2026-03-21T12:33:15.825 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd16 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:16.254 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd17
2026-03-21T12:33:16.254 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:16.254 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:16.255 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd17 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:16.255 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image19
2026-03-21T12:33:16.660 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd18
2026-03-21T12:33:16.660 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:16.660 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:16.660 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd18 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:16.684 INFO:tasks.workunit.client.0.vm01.stderr:++ sudo rbd device map rbd/image20
2026-03-21T12:33:16.966 INFO:tasks.workunit.client.0.vm01.stderr:+ DEVS[$i]=/dev/rbd19
2026-03-21T12:33:16.966 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T12:33:16.966 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 20 ))
2026-03-21T12:33:16.967 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096396
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ END_TIME=1774099996
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T12:33:16.969 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:33:16.970 INFO:tasks.workunit.client.0.vm01.stderr:+ fio --name=fiotest --filename=/dev/rbd19 --rw=randrw --bs=4K --direct=1 --ioengine=libaio --iodepth=2 --runtime=43200 --time_based
2026-03-21T12:33:26.971 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:33:26.971 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:33:26.971 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:33:26.987 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:33:27.456 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1743657854
2026-03-21T12:33:27.456 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
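The xtrace above is the workunit's setup phase: twenty exclusive-lock-only images are created, snapshot-based mirroring is enabled with a 1m schedule on each, and each image is then krbd-mapped and given a long-running 4K random read/write fio job. A dry-run sketch of that phase, reconstructed from the trace (an assumption, not the workunit's actual source; the `run` stub is hypothetical and just prints each command — replace its body with `"$@"` to execute on a real cluster):

```shell
#!/usr/bin/env bash
# Dry-run stub: print the command instead of executing it.
run() { printf '%s\n' "$*"; }

setup_images() {
    local i
    # Create each image, enable snapshot mirroring, schedule a 1m mirror snapshot.
    for ((i = 1; i <= 20; i++)); do
        run rbd create -s 1G --image-feature exclusive-lock rbd/image$i
        run rbd mirror image enable rbd/image$i snapshot
        run rbd mirror snapshot schedule add -p rbd --image image$i 1m
    done
    # Map each image (devices come back as /dev/rbd0../dev/rbd19) and start
    # a 12-hour 4K random R/W fio job against it.
    for ((i = 1; i <= 20; i++)); do
        run sudo rbd device map rbd/image$i
        run fio --name=fiotest --filename=/dev/rbd$((i - 1)) --rw=randrw \
            --bs=4K --direct=1 --ioengine=libaio --iodepth=2 \
            --runtime=43200 --time_based
    done
}

SETUP_CMDS=$(setup_images)
```

In the trace the fio jobs run concurrently in the background; the sketch only shows the commands issued.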
2026-03-21T12:33:27.456 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096407
2026-03-21T12:33:27.457 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:33:27.457 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1743657854 ]]
2026-03-21T12:33:27.457 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1743657854 != '' ]]
2026-03-21T12:33:27.457 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/1743657854
2026-03-21T12:33:29.433 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/1743657854 until 2026-03-21T13:33:28.540738+0000 (3600 sec)
2026-03-21T12:33:29.452 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:33:29.453 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/1743657854
2026-03-21T12:33:29.979 INFO:tasks.workunit.client.0.vm01.stderr:listed 1 entries
2026-03-21T12:33:29.998 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/1743657854
2026-03-21T12:33:29.998 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:33:40.013 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:33:40.019 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:33:40.019 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:33:40.027 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:33:40.404 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1743657854
2026-03-21T12:33:40.409 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:33:40.410 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096420
2026-03-21T12:33:40.410 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:33:40.410 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1743657854 ]]
2026-03-21T12:33:40.410 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1743657854 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\7\4\3\6\5\7\8\5\4 ]]
2026-03-21T12:33:40.410 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:33:50.411 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:33:50.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:33:50.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:33:50.415 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:33:51.075 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1743657854
2026-03-21T12:33:51.075 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:33:51.077 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096431
2026-03-21T12:33:51.077 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:33:51.078 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1743657854 ]]
2026-03-21T12:33:51.078 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1743657854 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\7\4\3\6\5\7\8\5\4 ]]
2026-03-21T12:33:51.078 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:01.083 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:01.083 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:01.095 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:01.095 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:01.863 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1743657854
2026-03-21T12:34:01.863 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
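The repeated `ceph mgr dump | jq` pipeline in the trace extracts the address/nonce of the mgr's rbd_support RADOS client, which is what gets blocklisted. A standalone sketch of the same jq stages against a canned dump (the JSON fragment's exact shape is an assumption inferred from the trace; on a live cluster the input would come from `ceph mgr dump`):

```shell
# Minimal assumed mgr-dump fragment containing one rbd_support client.
MGR_DUMP='{"active_clients":[{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.101:0","nonce":1743657854}]}]}'

# Same stages as the workunit: unwrap the client list, keep the rbd_support
# entry, then join addr, "/", and the stringified nonce into "addr/nonce".
CLIENT_ADDR=$(printf '%s' "$MGR_DUMP" \
    | jq '.active_clients[]' \
    | jq 'select(.name == "rbd_support")' \
    | jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add')
echo "$CLIENT_ADDR"
```

Note the jq precedence quirk the pipeline relies on: inside `[... , ... , ...|tostring]` the `|tostring` applies to each comma-separated element, so `add` concatenates three strings into the familiar `ip:port/nonce` form.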
2026-03-21T12:34:01.864 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096441
2026-03-21T12:34:01.864 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:01.864 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1743657854 ]]
2026-03-21T12:34:01.864 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1743657854 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\7\4\3\6\5\7\8\5\4 ]]
2026-03-21T12:34:01.864 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:11.869 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:11.871 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:11.877 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:11.888 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:12.343 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:34:12.343 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:34:12.352 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096452
2026-03-21T12:34:12.352 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:12.352 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]]
2026-03-21T12:34:12.352 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\7\4\3\6\5\7\8\5\4 ]]
2026-03-21T12:34:12.352 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/4182998574
2026-03-21T12:34:13.834 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/4182998574 until 2026-03-21T13:34:12.940206+0000 (3600 sec)
2026-03-21T12:34:13.865 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:34:13.865 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/4182998574
2026-03-21T12:34:14.346 INFO:tasks.workunit.client.0.vm01.stderr:listed 2 entries
2026-03-21T12:34:14.364 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:34:14.364 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:24.367 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:24.368 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:24.368 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:24.379 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:25.073 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:34:25.073 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:34:25.074 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096465
2026-03-21T12:34:25.074 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:25.074 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]]
2026-03-21T12:34:25.074 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]]
2026-03-21T12:34:25.074 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:35.083 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:35.083 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:35.099 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:35.099 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:35.591 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
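Each iteration traced above follows the same pattern: poll the current rbd_support client address, and whenever it is non-empty and differs from the previously blocklisted one (meaning the module has recovered and reconnected with a fresh nonce), blocklist it again. A standalone sketch of that control flow, reconstructed from the xtrace (an assumption, not the workunit's source; the `ADDRS` list and the `blocklist` stub are illustrative stand-ins for `ceph mgr dump | jq ...` and `ceph osd blocklist add`):

```shell
#!/usr/bin/env bash
# Simulated rbd_support client addresses over successive polls; a repeated
# value means the module has not reconnected since the last blocklisting.
ADDRS=(addr/1 addr/1 addr/2 addr/2 addr/3)
BLOCKLISTED=()
blocklist() { BLOCKLISTED+=("$1"); }   # stub for: ceph osd blocklist add "$1"

PREV_CLIENT_ADDR=
for CLIENT_ADDR in "${ADDRS[@]}"; do   # real loop: poll every 10s until END_TIME
    if [[ -n $CLIENT_ADDR && $CLIENT_ADDR != "$PREV_CLIENT_ADDR" ]]; then
        blocklist "$CLIENT_ADDR"       # sever the module's RADOS connection
        PREV_CLIENT_ADDR=$CLIENT_ADDR
    fi
done
```

With the simulated sequence, only the three distinct addresses get blocklisted, mirroring the "listed 1 entries" / "listed 2 entries" progression in the trace.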
2026-03-21T12:34:35.591 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:34:35.592 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096475
2026-03-21T12:34:35.592 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:35.592 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]]
2026-03-21T12:34:35.592 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]]
2026-03-21T12:34:35.592 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:45.596 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:45.598 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:45.600 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:45.608 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:46.427 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:34:46.427 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:34:46.428 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096486
2026-03-21T12:34:46.428 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:46.428 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]]
2026-03-21T12:34:46.428 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]]
2026-03-21T12:34:46.428 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:34:56.430 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:34:56.433 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:34:56.438 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:34:56.445 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:34:57.071 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:34:57.075 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:34:57.076 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096497
2026-03-21T12:34:57.076 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:34:57.076 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]]
2026-03-21T12:34:57.076 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]]
2026-03-21T12:34:57.076 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214a780 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214ba80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214bb00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cda80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cd800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912ccb00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.062 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.062 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.057+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.066 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.066 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cd980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214bb00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cda80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::ImageState: 0x55e99214ba80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9912cd800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9912ccb00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.065+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9912cc300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.071 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.069+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.071 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.069+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9912cc980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.072 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.069+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:00.072 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:35:00.069+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9912cd980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:35:07.077 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:35:07.081 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:35:07.095 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:35:07.104 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:35:07.675 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4182998574
2026-03-21T12:35:07.686 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:35:07.689
INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096507 2026-03-21T12:35:07.689 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:35:07.689 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4182998574 ]] 2026-03-21T12:35:07.689 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4182998574 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]] 2026-03-21T12:35:07.689 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:35:17.701 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:35:17.703 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:35:17.708 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:35:17.721 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:35:18.257 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311 2026-03-21T12:35:18.258 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:35:18.258 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096518 2026-03-21T12:35:18.258 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:35:18.259 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]] 2026-03-21T12:35:18.259 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\1\8\2\9\9\8\5\7\4 ]] 2026-03-21T12:35:18.259 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/361151311 2026-03-21T12:35:19.538 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/361151311 until 2026-03-21T13:35:18.605566+0000 (3600 sec) 2026-03-21T12:35:19.564 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:35:19.564 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 
192.168.123.101:0/361151311 2026-03-21T12:35:20.095 INFO:tasks.workunit.client.0.vm01.stderr:listed 3 entries 2026-03-21T12:35:20.150 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/361151311 2026-03-21T12:35:20.150 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:35:30.153 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:35:30.157 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:35:30.162 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:35:30.162 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:35:30.892 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311 2026-03-21T12:35:30.892 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:35:30.893 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096530 2026-03-21T12:35:30.893 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:35:30.893 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]] 2026-03-21T12:35:30.893 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]] 2026-03-21T12:35:30.893 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:35:40.902 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:35:40.907 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:35:40.914 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:35:40.923 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:35:41.548 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311 2026-03-21T12:35:41.548 
INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:35:41.550 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096541 2026-03-21T12:35:41.550 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:35:41.550 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]] 2026-03-21T12:35:41.550 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]] 2026-03-21T12:35:41.550 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:35:51.552 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:35:51.553 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:35:51.560 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:35:51.564 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:35:52.199 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311 2026-03-21T12:35:52.209 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:35:52.228 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096552 2026-03-21T12:35:52.228 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:35:52.228 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]] 2026-03-21T12:35:52.228 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]] 2026-03-21T12:35:52.228 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:36:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 
7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398780 failed to open image: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398880 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398a80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398b80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398c80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398d80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.017+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992399080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.025+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.025+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398780 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398880 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398a80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398b80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.034 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.029+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398c80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398d80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:36:00.041+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992399080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:36:02.237 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:02.241 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:02.244 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:02.244 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560578d76ea0
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T12:36:02.389 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:02.385+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T12:36:02.807 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311
2026-03-21T12:36:02.807 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:02.808 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096562
2026-03-21T12:36:02.808 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:02.808 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]]
2026-03-21T12:36:02.808 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]]
2026-03-21T12:36:02.808 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560578d76ea0
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T12:36:04.157 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:36:04.153+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T12:36:12.812 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:12.812 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:12.816 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:12.820 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:13.316 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311
2026-03-21T12:36:13.321 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:13.323 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096573
2026-03-21T12:36:13.323 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:13.323 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]]
2026-03-21T12:36:13.323 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]]
2026-03-21T12:36:13.323 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:36:23.332 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:23.332 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:23.332 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:23.344 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:23.894 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/361151311
2026-03-21T12:36:23.894 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:23.897 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096583
2026-03-21T12:36:23.897 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:23.897 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/361151311 ]]
2026-03-21T12:36:23.897 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/361151311 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]]
2026-03-21T12:36:23.897 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:36:33.897 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:33.902 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:33.914 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:33.935 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:34.408 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:36:34.408 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:34.411 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096594
2026-03-21T12:36:34.411 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:34.412 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:36:34.412 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\6\1\1\5\1\3\1\1 ]]
2026-03-21T12:36:34.412 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/1632011843
2026-03-21T12:36:36.698 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/1632011843 until 2026-03-21T13:36:35.812945+0000 (3600 sec)
2026-03-21T12:36:36.736 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:36:36.739 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/1632011843
2026-03-21T12:36:37.223 INFO:tasks.workunit.client.0.vm01.stderr:listed 4 entries
2026-03-21T12:36:37.263 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:36:37.263 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:36:47.266 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:47.266 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:47.266 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:47.288 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:48.043 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:36:48.045 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:48.046 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096608
2026-03-21T12:36:48.046 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:48.046 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:36:48.046 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:36:48.046 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:36:58.053 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:36:58.053 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:36:58.061 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:36:58.071 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:36:58.627 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096618
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:36:58.628 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:37:00.026 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.026 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992540500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.021+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.025+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.025+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.025+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.025+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.037+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.037+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.037+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.037+0000 7efe2a700640 -1 librbd::ImageState: 0x55e99214b900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.041+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.041+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992398f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.041+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.041+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.049+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.063 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.061+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.063 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.061+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540580 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.061+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:37:00.061+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992540680 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:37:08.637 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:37:08.641 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:37:08.651 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:37:08.663 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:37:09.135 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:37:09.135 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:37:09.141 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096629
2026-03-21T12:37:09.141 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:37:09.141 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:37:09.141 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:37:09.141 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:37:19.146 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:37:19.146 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:37:19.163 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:37:19.171 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:37:19.619 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:37:19.619 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:37:19.620 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096639
2026-03-21T12:37:19.620 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:37:19.620 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:37:19.620 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:37:19.620 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:37:29.631 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:37:29.632 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:37:29.632 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:37:29.640 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:37:30.245 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1632011843
2026-03-21T12:37:30.257 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:37:30.258 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096650
2026-03-21T12:37:30.258 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:37:30.258 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1632011843 ]]
2026-03-21T12:37:30.258 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1632011843 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:37:30.258 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:37:40.266 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:37:40.270 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:37:40.272 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:37:40.291 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:37:40.878 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:37:40.878 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:37:40.879 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096660
2026-03-21T12:37:40.879 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:37:40.879 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4234591622 ]]
2026-03-21T12:37:40.879 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4234591622 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\3\2\0\1\1\8\4\3 ]]
2026-03-21T12:37:40.879 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/4234591622
2026-03-21T12:37:43.155 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/4234591622 until 2026-03-21T13:37:42.219911+0000 (3600 sec)
2026-03-21T12:37:43.171 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:37:43.171 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/4234591622
2026-03-21T12:37:43.703 INFO:tasks.workunit.client.0.vm01.stderr:listed 5 entries
2026-03-21T12:37:43.723 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:37:43.723 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:37:53.724 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:37:53.724 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:37:53.727 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:37:53.743 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:37:54.137 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:37:54.137 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:37:54.139 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096674
2026-03-21T12:37:54.139 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:37:54.139 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4234591622 ]]
2026-03-21T12:37:54.139 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4234591622 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\2\3\4\5\9\1\6\2\2 ]]
2026-03-21T12:37:54.139 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.001+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.001+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.001+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.001+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6580 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.021 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.021 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6b00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.026 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.026 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.025+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.025+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541580 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.025+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.025+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541680 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541780 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.033 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.033 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541880 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.029+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.037+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.037+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.063 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.057+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:00.063 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:38:00.057+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:38:04.148 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:04.149 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:04.154 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:04.159 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:04.701 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:38:04.701 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:04.705 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096684
2026-03-21T12:38:04.705 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:04.706 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4234591622 ]]
2026-03-21T12:38:04.706 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4234591622 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\2\3\4\5\9\1\6\2\2 ]]
2026-03-21T12:38:04.706 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:14.713 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:14.716 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:14.717 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:14.719 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:15.432 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:38:15.432 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:15.433 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096695
2026-03-21T12:38:15.433 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:15.433 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4234591622 ]]
2026-03-21T12:38:15.433 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4234591622 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\2\3\4\5\9\1\6\2\2 ]]
2026-03-21T12:38:15.433 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:25.434 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:25.435 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:25.437 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:25.440 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:25.908 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/4234591622
2026-03-21T12:38:25.908 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:25.909 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096705
2026-03-21T12:38:25.909 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:25.909 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/4234591622 ]]
2026-03-21T12:38:25.909 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/4234591622 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\2\3\4\5\9\1\6\2\2 ]]
2026-03-21T12:38:25.909 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:35.920 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:35.922 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:35.926 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:35.928 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:36.510 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947
2026-03-21T12:38:36.510 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:36.511 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096716
2026-03-21T12:38:36.511 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:36.511 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]]
2026-03-21T12:38:36.511 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\4\2\3\4\5\9\1\6\2\2 ]]
2026-03-21T12:38:36.511 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/1237631947
2026-03-21T12:38:37.978 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/1237631947 until 2026-03-21T13:38:37.067639+0000 (3600 sec)
2026-03-21T12:38:37.993 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:38:37.993 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/1237631947
2026-03-21T12:38:38.509 INFO:tasks.workunit.client.0.vm01.stderr:listed 6 entries
2026-03-21T12:38:38.527 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/1237631947
2026-03-21T12:38:38.527 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:39.524 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:38:39.524 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:38:39.524 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:38:39.524 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:38:39.524 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:38:39.532 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T12:38:39.532 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:38:39.532 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.521+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:38:39.535 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.529+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T12:38:39.535 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:38:39.529+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T12:38:48.533 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:48.533 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:48.544 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:48.556 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:49.073 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947
2026-03-21T12:38:49.073 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:49.074 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096729
2026-03-21T12:38:49.074 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:49.074 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]]
2026-03-21T12:38:49.074 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]]
2026-03-21T12:38:49.074 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:38:59.097 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:38:59.098 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:38:59.100 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:38:59.111 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:38:59.634 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947
2026-03-21T12:38:59.634 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:38:59.640 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096739
2026-03-21T12:38:59.640 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:38:59.640 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]]
2026-03-21T12:38:59.640 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]]
2026-03-21T12:38:59.640 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6c00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.017+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a6d00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a7800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a7900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.033+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7b00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a6300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.049 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.049 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.045+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.052 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.049+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.052 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.049+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.055 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.053+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.055 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.053+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a6f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.053+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.053+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a7280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9926a6e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6c00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.070 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.065+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:00.070 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:39:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:39:05.056 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:39:05.053+0000 7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380
2026-03-21T12:39:09.641 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:39:09.644 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:39:09.644 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:39:09.647 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:39:10.219 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947
2026-03-21T12:39:10.224 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:39:10.225 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096750
2026-03-21T12:39:10.225 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:39:10.225 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]]
2026-03-21T12:39:10.225 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]]
2026-03-21T12:39:10.225 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:39:17.890 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.885+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:39:17.890
INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.885+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560578d76ea0 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579404820 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579668000 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00 2026-03-21T12:39:17.892 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:39:17.889+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930 2026-03-21T12:39:20.227 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:39:20.227 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:39:20.227 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r 
'[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:39:20.262 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:39:20.656 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947 2026-03-21T12:39:20.657 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:39:20.664 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096760 2026-03-21T12:39:20.664 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:39:20.664 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]] 2026-03-21T12:39:20.664 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]] 2026-03-21T12:39:20.664 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:39:30.667 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:39:30.668 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:39:30.682 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:39:30.691 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:39:31.357 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947 2026-03-21T12:39:31.357 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:39:31.358 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096771 2026-03-21T12:39:31.358 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:39:31.358 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]] 2026-03-21T12:39:31.358 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]] 2026-03-21T12:39:31.358 INFO:tasks.workunit.client.0.vm01.stderr:+ 
sleep 10 2026-03-21T12:39:41.360 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:39:41.362 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:39:41.362 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:39:41.371 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:39:41.888 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1237631947 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096781 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1237631947 ]] 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1237631947 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]] 2026-03-21T12:39:41.891 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:39:51.893 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:39:51.893 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:39:51.899 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:39:51.907 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:39:52.372 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:39:52.372 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:39:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096792 2026-03-21T12:39:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:39:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 
192.168.123.101:0/3811815956 ]] 2026-03-21T12:39:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\2\3\7\6\3\1\9\4\7 ]] 2026-03-21T12:39:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/3811815956 2026-03-21T12:39:54.702 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/3811815956 until 2026-03-21T13:39:53.835145+0000 (3600 sec) 2026-03-21T12:39:54.715 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:39:54.715 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/3811815956 2026-03-21T12:39:55.134 INFO:tasks.workunit.client.0.vm01.stderr:listed 7 entries 2026-03-21T12:39:55.181 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:39:55.181 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.024 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991bd4000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991bd5880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991bd4780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214bd80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 
0x55e99214b900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992398500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992541380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99280e380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e98fd58f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.021+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991bd4000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.025+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.025+0000 7efe2af01640 -1 librbd::ImageState: 0x55e98fd59600 failed to open image: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:40:00.039 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.037+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.039 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.037+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a6c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.065 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.066 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.061+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.066 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.061+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992398680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2b702640 -1 
librbd::ImageState: 0x55e99214bd80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.065+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991cdbe00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.077 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.073+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.077 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.073+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9926a6b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.078 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.073+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:00.079 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:40:00.073+0000 7efe2af01640 -1 librbd::ImageState: 0x55e98fd58c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:40:05.210 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:05.227 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:05.227 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:40:05.228 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:05.787 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:40:05.787 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:05.788 
INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096805 2026-03-21T12:40:05.788 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:05.788 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3811815956 ]] 2026-03-21T12:40:05.788 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:05.788 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:15.801 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:40:15.801 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:15.809 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:15.810 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096816 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3811815956 ]] 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:16.347 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:26.350 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:40:26.350 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:26.350 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:26.387 
INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:27.030 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:40:27.030 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:27.031 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096827 2026-03-21T12:40:27.031 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:27.031 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3811815956 ]] 2026-03-21T12:40:27.031 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:27.031 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:37.032 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:37.034 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:37.035 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:37.035 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:40:37.593 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:40:37.593 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:37.594 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096837 2026-03-21T12:40:37.594 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:37.594 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3811815956 ]] 2026-03-21T12:40:37.594 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:37.594 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:47.597 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph 
mgr dump 2026-03-21T12:40:47.603 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:47.603 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:47.603 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:48.145 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3811815956 2026-03-21T12:40:48.145 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:48.146 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096848 2026-03-21T12:40:48.146 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:48.146 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3811815956 ]] 2026-03-21T12:40:48.146 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3811815956 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:48.146 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:40:58.149 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:40:58.149 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:40:58.150 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:40:58.159 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:40:58.692 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002 2026-03-21T12:40:58.692 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:40:58.700 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096858 2026-03-21T12:40:58.700 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:40:58.700 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]] 2026-03-21T12:40:58.700 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\8\1\1\8\1\5\9\5\6 ]] 2026-03-21T12:40:58.700 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/3757325002 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c07580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.009+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c07880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.015 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.013+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.013+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07980 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c07a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.017+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.021+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:41:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.021+0000 7efe2af01640 -1 librbd::ImageState: 
0x55e991c07d80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.021+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.021+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991c07e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.029+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.029+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c07f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9928a0080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9928a0180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.041+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.077 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.073+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.078 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.073+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0580 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.078 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.073+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.078 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.073+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0680 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.081+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.081+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.081+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0780 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:41:00.081+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9928a0880 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:41:00.205 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/3757325002 until 2026-03-21T13:40:59.301530+0000 (3600 sec)
2026-03-21T12:41:00.246 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/3757325002
2026-03-21T12:41:00.256 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:41:00.603 INFO:tasks.workunit.client.0.vm01.stderr:listed 8 entries
2026-03-21T12:41:00.620 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:00.620 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:41:09.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:41:09.153 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:41:09.154 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T12:41:09.155 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T12:41:09.156 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T12:41:09.156 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:41:09.149+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T12:41:10.634 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:41:10.634 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:41:10.636 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:41:10.655 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:41:11.291 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:11.291 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:41:11.292 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096871
2026-03-21T12:41:11.292 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:41:11.292 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]]
2026-03-21T12:41:11.292 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:41:11.292 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:41:21.300 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:41:21.303 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:41:21.303 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:41:21.313 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:41:21.780 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:21.780 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:41:21.787 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096881
2026-03-21T12:41:21.787 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:41:21.787 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]]
2026-03-21T12:41:21.787 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:41:21.787 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:41:31.787 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:41:31.789 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:41:31.793 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:41:31.799 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:41:32.285 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096892
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]]
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:41:32.286 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:41:42.291 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:41:42.291 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:41:42.291 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:41:42.302 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:41:42.841 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:42.841 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:41:42.842 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096902
2026-03-21T12:41:42.842 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:41:42.842 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]]
2026-03-21T12:41:42.842 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:41:42.842 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:41:52.844 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:41:52.855 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:41:52.858 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:41:52.866 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:41:53.419 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3757325002
2026-03-21T12:41:53.424 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:41:53.426 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096913
2026-03-21T12:41:53.426 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:41:53.426 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3757325002 ]]
2026-03-21T12:41:53.426 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3757325002 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:41:53.426 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:03.428 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:03.429 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:03.431 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:03.435 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096923
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\5\7\3\2\5\0\0\2 ]]
2026-03-21T12:42:03.945 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/306406911
2026-03-21T12:42:06.311 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/306406911 until 2026-03-21T13:42:05.387616+0000 (3600 sec)
2026-03-21T12:42:06.338 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:42:06.338 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/306406911
2026-03-21T12:42:06.983 INFO:tasks.workunit.client.0.vm01.stderr:listed 9 entries
2026-03-21T12:42:06.996 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:06.996 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:16.999 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:16.999 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:17.027 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:17.030 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:17.532 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:17.534 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:17.535 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096937
2026-03-21T12:42:17.535 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:17.535 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:17.535 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:42:17.535 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:27.554 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:27.556 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:27.556 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:27.565 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:28.147 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:28.153 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:28.155 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096948
2026-03-21T12:42:28.155 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:28.155 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:28.155 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:42:28.155 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:38.163 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:38.165 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:38.165 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:38.190 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:38.415 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560578d76ea0
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T12:42:38.416 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:42:38.414+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T12:42:38.710 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:38.710 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:38.711 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096958
2026-03-21T12:42:38.711 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:38.711 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:38.711 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:42:38.711 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:48.717 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:48.718 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:48.718 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:48.737 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:49.278 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:49.278 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:49.280 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096969
2026-03-21T12:42:49.280 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:49.280 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:49.280 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:42:49.280 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:42:59.299 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:42:59.299 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:42:59.307 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:42:59.310 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:42:59.781 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:42:59.783 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:42:59.784 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096979
2026-03-21T12:42:59.784 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:42:59.784 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:42:59.784 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:42:59.784 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:43:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.006+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.006+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07b80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.006+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.006+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c06200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.021 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.018+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.021 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.018+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992398480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c07480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.030+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.030+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280fc80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.042+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.042+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280e700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.050 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.050 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992398700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.053 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.050+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.053 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.050+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280e200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.055 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.050+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.055 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.050+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280e480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.095 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.090+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.095 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.090+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280fd80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.107 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.102+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.107 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.102+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.107 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.102+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9912cc000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.107 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.102+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991781f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.111 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.106+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.111 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.106+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991780180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99280fa80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2b702640 -1 librbd::ImageState: 0x55e99214a180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:00.113 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:43:00.110+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9928a0300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:43:09.796 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:43:09.798 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:43:09.802 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:43:09.802 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:43:10.326 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/306406911
2026-03-21T12:43:10.330 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:43:10.331 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774096990
2026-03-21T12:43:10.331 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:43:10.331 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/306406911 ]]
2026-03-21T12:43:10.331 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/306406911 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:43:10.331 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:43:20.335 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:43:20.335 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:43:20.348 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:43:20.349 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:43:20.926 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097000
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]]
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\0\6\4\0\6\9\1\1 ]]
2026-03-21T12:43:20.927 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/2034162920
2026-03-21T12:43:22.541 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/2034162920 until 2026-03-21T13:43:21.686340+0000 (3600 sec)
2026-03-21T12:43:22.572 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:43:22.579 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/2034162920
2026-03-21T12:43:23.062 INFO:tasks.workunit.client.0.vm01.stderr:listed 10 entries
2026-03-21T12:43:23.089 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/2034162920
2026-03-21T12:43:23.089 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:43:33.106 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:43:33.107 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:43:33.108 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:43:33.118 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:43:34.002 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920
2026-03-21T12:43:34.003 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:43:34.006 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097014
2026-03-21T12:43:34.006 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:43:34.006 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]]
2026-03-21T12:43:34.006 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]]
2026-03-21T12:43:34.006 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:43:44.023 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:43:44.035 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:43:44.042 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:43:44.053 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:43:44.542 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920
2026-03-21T12:43:44.542 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:43:44.543 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097024
2026-03-21T12:43:44.543 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:43:44.543 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]]
2026-03-21T12:43:44.543 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]]
2026-03-21T12:43:44.543 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:43:54.548 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:43:54.554 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:43:54.555 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:43:54.559 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:43:55.136 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920
2026-03-21T12:43:55.136 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:43:55.136 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097035
2026-03-21T12:43:55.136 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:43:55.136 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]]
2026-03-21T12:43:55.137 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]]
2026-03-21T12:43:55.137 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991334e00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991334f00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991335000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.043 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.034+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.034+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991335300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest:
failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991335400 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991335600 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335700 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991334e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991334f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335100 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 
0x55e991335300 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335200 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335400 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.084 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.078+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335600 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.082+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.082+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.087 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.086+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:00.087 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:44:00.086+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335700 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:44:05.138 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:44:05.138 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:44:05.138 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:44:05.147 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:44:05.697 
INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920 2026-03-21T12:44:05.699 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:44:05.700 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097045 2026-03-21T12:44:05.700 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:44:05.700 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]] 2026-03-21T12:44:05.700 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]] 2026-03-21T12:44:05.700 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:44:15.705 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:44:15.709 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:44:15.709 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:44:15.721 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:44:16.288 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2034162920 2026-03-21T12:44:16.288 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:44:16.289 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097056 2026-03-21T12:44:16.289 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:44:16.289 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2034162920 ]] 2026-03-21T12:44:16.289 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2034162920 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]] 2026-03-21T12:44:16.289 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:44:26.291 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:44:26.291 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 
2026-03-21T12:44:26.298 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:44:26.302 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:44:26.732 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:44:26.732 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:44:26.734 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097066
2026-03-21T12:44:26.734 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:44:26.734 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:44:26.734 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\0\3\4\1\6\2\9\2\0 ]]
2026-03-21T12:44:26.734 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/887815473
2026-03-21T12:44:28.703 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/887815473 until 2026-03-21T13:44:27.806144+0000 (3600 sec)
2026-03-21T12:44:28.731 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:44:28.731 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/887815473
2026-03-21T12:44:29.212 INFO:tasks.workunit.client.0.vm01.stderr:listed 11 entries
2026-03-21T12:44:29.233 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:44:29.234 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:44:39.238 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:44:39.239 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:44:39.243 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:44:39.245 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:44:39.760 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:44:39.761 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:44:39.765 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097079
2026-03-21T12:44:39.765 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:44:39.765 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:44:39.765 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:44:39.765 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:44:49.766 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:44:49.769 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:44:49.769 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:44:49.772 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097090
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:44:50.170 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.023 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929a3900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929a3a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3b00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3c00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3d00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929a3e00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3f00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.030+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.030+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:45:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:45:00.172 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:00.173 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:00.178 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:00.181 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:00.803 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:45:00.804 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:00.805 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097100
2026-03-21T12:45:00.805 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:00.805 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:45:00.805 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:45:00.805 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:10.809 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:10.809 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:10.817 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:10.818 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:11.547 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:45:11.547 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:11.548 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097111
2026-03-21T12:45:11.548 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:11.548 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:45:11.548 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:45:11.548 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:21.551 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:21.551 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:21.552 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:21.563 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:22.019 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/887815473
2026-03-21T12:45:22.019 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:22.020 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097122
2026-03-21T12:45:22.020 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:22.020 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/887815473 ]]
2026-03-21T12:45:22.020 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/887815473 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:45:22.020 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:32.023 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:32.024 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:32.030 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:32.035 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:32.523 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:45:32.523 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:32.524 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097132
2026-03-21T12:45:32.524 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:32.524 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:45:32.524 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\8\7\8\1\5\4\7\3 ]]
2026-03-21T12:45:32.524 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/833173849
2026-03-21T12:45:34.247 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/833173849 until 2026-03-21T13:45:33.398914+0000 (3600 sec)
2026-03-21T12:45:34.266 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:45:34.266 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/833173849
2026-03-21T12:45:34.959 INFO:tasks.workunit.client.0.vm01.stderr:listed 12 entries
2026-03-21T12:45:34.982 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:45:34.982 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:44.998 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:45.004 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:45.025 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:45.025 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:45.469 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:45:45.470 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:45.470 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097145
2026-03-21T12:45:45.471 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:45.471 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:45:45.471 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:45:45.471 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:45:55.498 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:45:55.503 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:45:55.503 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:45:55.504 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:45:56.051 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:45:56.051 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:45:56.056 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097156
2026-03-21T12:45:56.056 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:45:56.056 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:45:56.056 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:45:56.056 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:46:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.006+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.006+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992a53d00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992a53e00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992a53f00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.018+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.022+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.026+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:46:00.026+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929e2700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:46:06.058 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:46:06.061 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:46:06.065 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:46:06.071 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:46:06.634 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:46:06.634 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:46:06.635 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097166
2026-03-21T12:46:06.635 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:46:06.635 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:46:06.635 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:46:06.635 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:46:08.069 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.066+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T12:46:08.069 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.066+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.066+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.070+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.070+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.070+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.070+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T12:46:08.072 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:46:08.070+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T12:46:16.637 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:46:16.637 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:46:16.637 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:46:16.643 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:46:17.255 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:46:17.255 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:46:17.256 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097177
2026-03-21T12:46:17.256 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:46:17.256 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:46:17.256 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:46:17.256 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:46:27.257 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:46:27.257 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:46:27.259 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:46:27.271 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:46:27.835 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/833173849
2026-03-21T12:46:27.839 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:46:27.840 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097187
2026-03-21T12:46:27.840 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:46:27.840 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/833173849 ]]
2026-03-21T12:46:27.840 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/833173849 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:46:27.840 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:46:37.842 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:46:37.842 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:46:37.842 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:46:37.852 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:46:38.444 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097198
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\3\3\1\7\3\8\4\9 ]]
2026-03-21T12:46:38.447 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/1841019168
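The `jq` pipeline traced above builds the rbd_support client address by joining `addrvec[0].addr` and `addrvec[0].nonce` with a slash, after selecting the `rbd_support` entry from `active_clients`. A minimal Python equivalent of that extraction, run against a hypothetical, heavily abbreviated fragment of `ceph mgr dump` output (only the fields the pipeline touches are included; the sample values mirror the trace):

```python
import json

# Hypothetical, abbreviated sample of `ceph mgr dump` output.
mgr_dump = json.loads("""
{
  "active_clients": [
    {"name": "rbd_support",
     "addrvec": [{"type": "v2", "addr": "192.168.123.101:0", "nonce": 833173849}]},
    {"name": "other_module",
     "addrvec": [{"type": "v2", "addr": "192.168.123.101:0", "nonce": 111111111}]}
  ]
}
""")

def rbd_support_addr(dump):
    """Mirror of the traced pipeline:
    jq '.active_clients[]' | jq 'select(.name == "rbd_support")' |
    jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
    """
    for client in dump.get("active_clients", []):
        if client["name"] == "rbd_support":
            av = client["addrvec"][0]
            return f'{av["addr"]}/{av["nonce"]}'
    return None  # module not connected yet

print(rbd_support_addr(mgr_dump))  # -> 192.168.123.101:0/833173849
```

The resulting `addr/nonce` string is exactly the entity address form that `ceph osd blocklist add` accepts, which is why the trace feeds `CLIENT_ADDR` straight into the blocklist command.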
2026-03-21T12:46:39.860 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/1841019168 until 2026-03-21T13:46:39.006673+0000 (3600 sec)
2026-03-21T12:46:39.891 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:46:39.891 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/1841019168
2026-03-21T12:46:40.488 INFO:tasks.workunit.client.0.vm01.stderr:listed 13 entries
2026-03-21T12:46:40.514 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:46:40.514 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:46:50.520 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:46:50.522 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:46:50.527 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:46:50.535 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:46:51.552 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097211
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:46:51.553 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.005 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.007 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.007 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.002+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.008 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.008 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12900 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.014+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d12b00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.018+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.020 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.018+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12c00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.022 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.018+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.022 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.018+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d12d00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d12e00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d12f00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13000 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.026+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d13100 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d13300 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:47:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:47:01.560 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:01.560 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:01.565 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:01.571 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:02.154 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:47:02.181 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:02.181 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097222
2026-03-21T12:47:02.181 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:02.181 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:47:02.182 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:47:02.182 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:12.195 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:12.197 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:12.199 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:12.224 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:13.045 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:47:13.046 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:13.061 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097233
2026-03-21T12:47:13.061 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:13.061 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:47:13.061 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:47:13.061 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:23.068 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:23.080 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:23.092 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:23.110 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097243
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:47:23.687 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:33.713 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:33.715 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:33.719 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:33.729 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:34.277 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/1841019168
2026-03-21T12:47:34.279 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:34.280 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097254
2026-03-21T12:47:34.280 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:34.280 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/1841019168 ]]
2026-03-21T12:47:34.280 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/1841019168 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:47:34.280 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:44.292 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:44.295 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:44.301 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:44.312 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:44.862 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686
2026-03-21T12:47:44.862 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:44.863 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097264
2026-03-21T12:47:44.863 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:44.863 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]]
2026-03-21T12:47:44.863 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\8\4\1\0\1\9\1\6\8 ]]
2026-03-21T12:47:44.863 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/3769499686
2026-03-21T12:47:46.827 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/3769499686 until 2026-03-21T13:47:45.938771+0000 (3600 sec)
2026-03-21T12:47:46.861 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:47:46.861 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/3769499686
2026-03-21T12:47:47.287 INFO:tasks.workunit.client.0.vm01.stderr:listed 14 entries
2026-03-21T12:47:47.334 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/3769499686
2026-03-21T12:47:47.334 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:47:57.342 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:47:57.347 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:47:57.347 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:47:57.375 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:47:58.009 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686
2026-03-21T12:47:58.009 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:47:58.010 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097278
2026-03-21T12:47:58.010 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:47:58.010 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]]
2026-03-21T12:47:58.010 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]]
2026-03-21T12:47:58.010 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992541c80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991d13a80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13b80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.014+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.014+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13c80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.023 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.022+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991d13d80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.022+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.025 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.022+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991d13e80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.026+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.026+0000 7efe2a700640 -1 librbd::ImageState: 0x55e991d13f80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.034+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.034+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446180 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.041 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.038+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.041 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.038+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446480 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992446580 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.046+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.054 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.046+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446680 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446780 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992446880 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.054+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992446980 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.062+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.062+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446a80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.066+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:48:00.068 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:48:00.066+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992446b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:48:08.023 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:48:08.026 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:48:08.026 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:48:08.037 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:48:08.736 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097288 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]] 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]] 2026-03-21T12:48:08.737 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:48:18.744 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:48:18.755 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:48:18.758 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:48:18.761 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:48:19.290 
INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097299 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]] 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]] 2026-03-21T12:48:19.290 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:48:29.293 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:48:29.293 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:48:29.293 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:48:29.306 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:48:30.040 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686 2026-03-21T12:48:30.041 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:48:30.050 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097310 2026-03-21T12:48:30.051 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:48:30.051 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]] 2026-03-21T12:48:30.051 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]] 2026-03-21T12:48:30.051 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:48:40.060 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:48:40.060 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:48:40.066 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:48:40.067 
INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:48:40.525 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3769499686 2026-03-21T12:48:40.525 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:48:40.525 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097320 2026-03-21T12:48:40.526 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:48:40.526 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3769499686 ]] 2026-03-21T12:48:40.526 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3769499686 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]] 2026-03-21T12:48:40.526 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:48:50.527 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:48:50.528 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:48:50.538 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:48:50.546 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:48:51.131 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:48:51.132 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:48:51.133 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097331 2026-03-21T12:48:51.133 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:48:51.133 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:48:51.133 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\7\6\9\4\9\9\6\8\6 ]] 2026-03-21T12:48:51.133 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/2918169521 2026-03-21T12:48:53.003 
INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/2918169521 until 2026-03-21T13:48:52.141052+0000 (3600 sec) 2026-03-21T12:48:53.071 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:48:53.072 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/2918169521 2026-03-21T12:48:53.582 INFO:tasks.workunit.client.0.vm01.stderr:listed 15 entries 2026-03-21T12:48:53.607 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:48:53.607 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:00.008 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.002+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.008 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.002+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991d13900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.008 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9926a7900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992447000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to 
retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992447080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.006+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447480 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992447580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.034+0000 7efe2b702640 -1 librbd::ImageState: 
0x55e992447980 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.046+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.051 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.046+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.052 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.046+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.052 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992447b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:49:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447e80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.058 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.058 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.054+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992447f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.061 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.058+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:00.061 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:49:00.058+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af6080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:49:03.609 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:03.609 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:03.614 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:03.644 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:04.190 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:49:04.190 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:04.191 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097344 2026-03-21T12:49:04.191 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:04.191 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:49:04.191 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:04.191 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:14.202 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:14.209 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:14.221 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:14.222 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:14.819 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:49:14.819 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:14.820 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097354 2026-03-21T12:49:14.820 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:14.820 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:49:14.820 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:14.820 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:24.827 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:24.827 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:24.830 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:24.844 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:25.528 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:49:25.534 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:25.545 
INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097365 2026-03-21T12:49:25.557 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:25.557 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:49:25.557 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:25.557 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:35.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:35.547 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:35.551 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:35.561 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:36.192 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:49:36.192 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:36.194 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097376 2026-03-21T12:49:36.194 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:36.194 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:49:36.194 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:36.194 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:46.196 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:46.196 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:46.207 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:46.219 
INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:46.905 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2918169521 2026-03-21T12:49:46.905 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:46.906 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097386 2026-03-21T12:49:46.906 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:46.906 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2918169521 ]] 2026-03-21T12:49:46.906 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2918169521 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:46.906 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:49:56.916 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:49:56.919 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:49:56.921 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:49:56.921 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:49:57.666 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:49:57.666 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:49:57.667 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097397 2026-03-21T12:49:57.667 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:49:57.667 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:49:57.667 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\9\1\8\1\6\9\5\2\1 ]] 2026-03-21T12:49:57.667 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/2701712514 2026-03-21T12:49:59.580 
INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/2701712514 until 2026-03-21T13:49:58.702149+0000 (3600 sec) 2026-03-21T12:49:59.601 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:49:59.601 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/2701712514 2026-03-21T12:49:59.910 INFO:tasks.workunit.client.0.vm01.stderr:listed 16 entries 2026-03-21T12:49:59.924 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:49:59.924 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:50:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.026+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.026+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6480 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve 
name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.030+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6980 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af6c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2a700640 -1 librbd::ImageState: 
0x55e992af6d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6e80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af6f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.047 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af7380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992af7480 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:00.057 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:50:00.054+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:50:09.937 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:50:09.940 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:50:09.942 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:50:09.944 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:50:10.561 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:50:10.561 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:50:10.562 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097410 2026-03-21T12:50:10.562 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:50:10.562 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:50:10.562 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:50:10.562 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:50:10.691 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.686+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680 2026-03-21T12:50:10.691 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.686+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0 2026-03-21T12:50:10.691 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.686+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680 2026-03-21T12:50:10.693 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.690+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110 2026-03-21T12:50:10.693 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.690+0000 7f5cde659640 -1 reset not still connected to 0x560579404820 2026-03-21T12:50:10.702 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579668000 2026-03-21T12:50:10.702 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70 2026-03-21T12:50:10.702 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520 2026-03-21T12:50:10.703 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0 2026-03-21T12:50:10.703 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0 2026-03-21T12:50:10.703 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0 2026-03-21T12:50:10.703 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00 
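The xtrace lines above show the workunit computing `CLIENT_ADDR` by piping `ceph mgr dump` through `jq 'select(.name == "rbd_support")'` over `.active_clients[]` and joining `addrvec[0].addr`, `"/"`, and `addrvec[0].nonce`. A minimal Python sketch of that same extraction, for readers tracing the log by hand (the helper name and the trimmed sample payload are illustrative, not part of the test suite):

```python
import json

def rbd_support_client_addr(mgr_dump_json: str):
    """Mirror the jq pipeline from the trace: iterate .active_clients[],
    keep the entry named "rbd_support", and join addrvec[0].addr, "/",
    and addrvec[0].nonce into one address string."""
    dump = json.loads(mgr_dump_json)
    for client in dump.get("active_clients", []):
        if client.get("name") == "rbd_support":
            ep = client["addrvec"][0]
            return f'{ep["addr"]}/{ep["nonce"]}'
    return None  # rbd_support module not registered as an active client

# Payload trimmed to just the fields the pipeline reads; a real
# `ceph mgr dump` carries many more keys.
sample = json.dumps({"active_clients": [
    {"name": "rbd_support",
     "addrvec": [{"addr": "192.168.123.101:0", "nonce": 2701712514}]}]})
print(rbd_support_client_addr(sample))  # 192.168.123.101:0/2701712514
```

The resulting `addr/nonce` string is what the loop compares against `PREV_CLIENT_ADDR`: the nonce changes each time the module reconnects, so a changed string signals a restarted rbd_support client.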
2026-03-21T12:50:10.703 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:50:10.698+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930 2026-03-21T12:50:20.564 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:50:20.564 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:50:20.577 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:50:20.581 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:50:21.236 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:50:21.236 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:50:21.237 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097421 2026-03-21T12:50:21.237 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:50:21.237 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:50:21.237 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:50:21.237 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:50:31.239 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:50:31.239 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:50:31.240 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:50:31.282 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:50:32.046 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:50:32.046 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:50:32.047 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097432 2026-03-21T12:50:32.047 
INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:50:32.047 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:50:32.047 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:50:32.047 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:50:42.054 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:50:42.060 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:50:42.061 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:50:42.066 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:50:42.589 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:50:42.590 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:50:42.591 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097442 2026-03-21T12:50:42.591 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:50:42.591 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:50:42.591 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:50:42.591 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:50:52.596 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:50:52.596 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:50:52.601 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:50:52.612 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:50:53.251 
INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2701712514 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097453 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2701712514 ]] 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2701712514 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:50:53.252 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:03.259 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:03.259 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:03.259 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:03.304 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:03.884 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:03.884 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:03.885 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097463 2026-03-21T12:51:03.885 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:03.885 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:03.885 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\7\0\1\7\1\2\5\1\4 ]] 2026-03-21T12:51:03.885 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/3101190346 2026-03-21T12:51:05.355 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/3101190346 until 
2026-03-21T13:51:04.445535+0000 (3600 sec) 2026-03-21T12:51:05.390 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:51:05.390 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/3101190346 2026-03-21T12:51:05.910 INFO:tasks.workunit.client.0.vm01.stderr:listed 17 entries 2026-03-21T12:51:05.955 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:05.955 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:15.981 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:15.985 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:15.985 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:15.998 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:16.455 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:16.455 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:16.456 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097476 2026-03-21T12:51:16.456 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:16.456 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:16.456 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:51:16.456 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:26.467 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:26.469 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:26.476 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:26.479 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r 
'[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:26.999 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:26.999 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:27.000 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097486 2026-03-21T12:51:27.000 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:27.000 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:27.000 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:51:27.000 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:37.032 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:37.033 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:37.034 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:37.034 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:37.847 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:37.847 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:37.848 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097497 2026-03-21T12:51:37.848 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:37.848 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:37.848 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:51:37.848 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:38.087 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:51:38.082+0000 
7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380 2026-03-21T12:51:47.857 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:47.858 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:47.868 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:47.869 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:48.595 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:48.596 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:48.597 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097508 2026-03-21T12:51:48.597 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:48.597 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:48.597 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:51:48.597 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:51:58.615 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:51:58.615 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:51:58.615 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:51:58.640 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:51:59.241 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:51:59.241 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:51:59.242 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097519 2026-03-21T12:51:59.242 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:51:59.242 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:51:59.242 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:51:59.242 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:52:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.002+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.002+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13480 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992446e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991335f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9912cc500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.010+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991334780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.014+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.014+0000 7efe2af01640 
-1 librbd::ImageState: 0x55e9929e3f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.022+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.022+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.026+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929e2080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.026+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.031 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.026+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7b80 failed to open image: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:52:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7d00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.038 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.034+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992af7e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 
librbd::ImageState: 0x55e992af7f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94100 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:52:00.042+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94200 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:52:09.244 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:52:09.247 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:52:09.250 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:52:09.268 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:52:09.968 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3101190346 2026-03-21T12:52:09.968 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:52:09.969 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097529 2026-03-21T12:52:09.969 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:52:09.969 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3101190346 ]] 2026-03-21T12:52:09.969 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3101190346 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:52:09.969 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:52:19.973 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:52:19.973 
INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:52:19.973 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:52:19.979 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:52:20.461 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:52:20.461 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:52:20.469 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097540 2026-03-21T12:52:20.469 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:52:20.469 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:52:20.469 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\1\0\1\1\9\0\3\4\6 ]] 2026-03-21T12:52:20.469 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/3916209040 2026-03-21T12:52:22.602 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/3916209040 until 2026-03-21T13:52:21.735503+0000 (3600 sec) 2026-03-21T12:52:22.627 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:52:22.628 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/3916209040 2026-03-21T12:52:23.096 INFO:tasks.workunit.client.0.vm01.stderr:listed 18 entries 2026-03-21T12:52:23.109 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:52:23.109 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:52:33.111 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:52:33.112 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:52:33.124 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", 
.addrvec[0].nonce|tostring] | add' 2026-03-21T12:52:33.139 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:52:33.703 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:52:33.703 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:52:33.703 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097553 2026-03-21T12:52:33.703 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:52:33.703 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:52:33.704 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:52:33.704 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:52:43.714 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:52:43.721 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:52:43.726 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:52:43.727 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:52:44.363 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:52:44.368 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:52:44.369 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097564 2026-03-21T12:52:44.369 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:52:44.369 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:52:44.369 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:52:44.369 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:52:54.370 
INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:52:54.371 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:52:54.377 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:52:54.384 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:52:55.083 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:52:55.083 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:52:55.084 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097575 2026-03-21T12:52:55.084 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:52:55.084 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:52:55.084 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:52:55.084 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:53:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.026+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.026+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94d80 
failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94e80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.033 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.030+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.038+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.038+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94e80 
failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b95080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.046+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.054+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.059 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.054+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.075 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.070+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:00.075 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:53:00.070+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b95380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:53:05.089 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:53:05.089 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:53:05.095 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:53:05.109 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:53:05.605 
INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:53:05.605 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:53:05.608 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097585 2026-03-21T12:53:05.608 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:53:05.608 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:53:05.608 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:53:05.608 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:53:15.616 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:53:15.616 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:53:15.624 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:53:15.631 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:53:16.328 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/3916209040 2026-03-21T12:53:16.333 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:53:16.344 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097596 2026-03-21T12:53:16.345 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:53:16.345 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/3916209040 ]] 2026-03-21T12:53:16.345 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/3916209040 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:53:16.345 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:53:26.359 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:53:26.360 
INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:53:26.363 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:53:26.368 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:53:26.806 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:53:26.806 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:53:26.807 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097606 2026-03-21T12:53:26.807 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:53:26.807 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:53:26.807 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\3\9\1\6\2\0\9\0\4\0 ]] 2026-03-21T12:53:26.807 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/2442630233 2026-03-21T12:53:28.816 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/2442630233 until 2026-03-21T13:53:27.917005+0000 (3600 sec) 2026-03-21T12:53:28.869 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:53:28.869 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/2442630233 2026-03-21T12:53:29.363 INFO:tasks.workunit.client.0.vm01.stderr:listed 19 entries 2026-03-21T12:53:29.391 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:53:29.391 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:53:33.460 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:53:33.458+0000 7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380 2026-03-21T12:53:39.396 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:53:39.397 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:53:39.397 
INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:53:39.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:53:40.050 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:53:40.050 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:53:40.051 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097620 2026-03-21T12:53:40.051 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:53:40.051 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:53:40.051 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:53:40.051 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:53:50.055 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:53:50.055 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:53:50.055 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:53:50.075 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:53:50.648 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:53:50.648 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:53:50.660 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097630 2026-03-21T12:53:50.660 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:53:50.660 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:53:50.660 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 
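The recurring `(108) Cannot send after transport endpoint shutdown` messages from the mgr are consistent with the rbd_support module's RADOS connection having been blocklisted by the workunit: its in-flight image opens fail with `ESHUTDOWN`, which is errno 108 on Linux. A quick sanity check (Linux errno numbering assumed):

```python
import errno
import os

# ESHUTDOWN is the errno behind the "(108)" in the librbd messages above.
# The numeric value and message text are Linux-specific.
print(errno.ESHUTDOWN)               # → 108
print(os.strerror(errno.ESHUTDOWN))  # → Cannot send after transport endpoint shutdown
```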
2026-03-21T12:53:50.660 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.002+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.002+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.002+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.004 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.002+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992a52700 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9928a0e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929e3680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929a2f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7980 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af6700 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991334280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.012 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c06f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: 
failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7800 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.016 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.010+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.014+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.014+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b94880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.014+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.019 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.014+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.022 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.018+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.022 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.018+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991c06d00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.024 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95d80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.022+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.022+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95e80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95f80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6e080 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.030+0000 7efe2b702640 -1 librbd::ImageState: 
0x55e992c6e180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.034+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.037 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:54:00.034+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6e280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:54:00.666 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:00.670 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:00.678 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:54:00.695 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:01.143 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:54:01.143 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:01.149 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097641 2026-03-21T12:54:01.150 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:01.150 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:54:01.150 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:54:01.150 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:11.151 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:54:11.152 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:11.153 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:11.153 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r 
'[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:11.751 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:54:11.763 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:11.764 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097651 2026-03-21T12:54:11.764 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:11.764 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:54:11.764 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:54:11.764 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:21.794 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:54:21.794 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:21.810 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:21.812 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:22.513 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:54:22.514 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:22.515 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097662 2026-03-21T12:54:22.515 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:22.515 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:54:22.515 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:54:22.515 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:32.520 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 
2026-03-21T12:54:32.521 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:32.532 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:32.532 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:33.048 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/2442630233 2026-03-21T12:54:33.048 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:33.049 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097673 2026-03-21T12:54:33.049 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:33.049 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/2442630233 ]] 2026-03-21T12:54:33.049 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/2442630233 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:54:33.049 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:43.050 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:54:43.051 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:43.059 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:43.063 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:43.646 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491 2026-03-21T12:54:43.646 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:43.646 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097683 2026-03-21T12:54:43.646 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:43.647 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]] 2026-03-21T12:54:43.647 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\2\4\4\2\6\3\0\2\3\3 ]] 2026-03-21T12:54:43.647 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/840331491 2026-03-21T12:54:45.553 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/840331491 until 2026-03-21T13:54:44.646459+0000 (3600 sec) 2026-03-21T12:54:45.575 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:54:45.576 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/840331491 2026-03-21T12:54:46.131 INFO:tasks.workunit.client.0.vm01.stderr:listed 20 entries 2026-03-21T12:54:46.157 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/840331491 2026-03-21T12:54:46.157 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:54:56.162 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:54:56.162 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:54:56.169 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:54:56.196 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:54:56.759 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491 2026-03-21T12:54:56.761 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:54:56.762 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097696 2026-03-21T12:54:56.762 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:54:56.762 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]] 2026-03-21T12:54:56.762 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]] 2026-03-21T12:54:56.762 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 
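The bash trace above repeats one cycle: poll the current rbd_support client address; if it matches the previously blocklisted address, sleep 10 seconds and retry; once the module re-registers with a new nonce (e.g. `.../2442630233` → `.../840331491`), blocklist the new address and record it as `PREV_CLIENT_ADDR`. A minimal Python simulation of that state machine — the addresses and the `recovery_loop` helper are invented for illustration; only the compare-then-blocklist logic is taken from the trace:

```python
def recovery_loop(polls, blocklist):
    """Simulate the workunit's loop. `polls` yields the rbd_support client
    address seen on each `ceph mgr dump`; `blocklist` collects the addresses
    the script would pass to `ceph osd blocklist add`."""
    prev = None
    for addr in polls:
        if prev is None:
            prev = addr                 # seed with the first observed address
        elif addr and addr != prev:     # module reconnected with a new nonce
            blocklist.append(addr)      # ceph osd blocklist add <addr>
            prev = addr                 # PREV_CLIENT_ADDR=<addr>
        # else: same client still registered; the real script sleeps 10s here

blocked = []
polls = ["10.0.0.1:0/111", "10.0.0.1:0/111",
         "10.0.0.1:0/222", "10.0.0.1:0/222", "10.0.0.1:0/333"]
recovery_loop(polls, blocked)
print(blocked)  # → ['10.0.0.1:0/222', '10.0.0.1:0/333']
```

Each blocklist entry forces the rbd_support module to recover with a fresh connection, which is exactly the behavior this workunit exercises.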
2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2a700640 -1 librbd::ImageState: 0x55e9929a2280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b94600 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b95a00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2a700640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.009 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992b95c80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.010 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.007+0000 7efe2af01640 -1 
librbd::ImageState: 0x55e992b95b80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.011 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e400 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e480 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.014 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.011+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.031+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.031+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.055+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:55:00.056 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.055+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e680 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.063 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.063+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.064 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.063+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.067+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.067+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.075 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.071+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.075 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.071+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6e980 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ea80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2af01640 -1 
librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.083 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2a700640 -1 librbd::ImageState: 0x55e992c6eb80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ec80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.083+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ed80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.088 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.087+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.088 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.087+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ee80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.090 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.087+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:55:00.091 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.087+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ef80 failed to open image: (108) Cannot send after transport endpoint shutdown 
2026-03-21T12:55:00.093 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.091+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:55:00.093 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:55:00.091+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6f080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:55:06.768 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:55:06.770 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:55:06.777 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:55:06.789 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:55:07.209 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491
2026-03-21T12:55:07.210 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:55:07.211 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097707
2026-03-21T12:55:07.211 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:55:07.211 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]]
2026-03-21T12:55:07.211 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]]
2026-03-21T12:55:07.211 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:55:17.216 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:55:17.221 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:55:17.221 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:55:17.222 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:55:17.897 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491
2026-03-21T12:55:17.897 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:55:17.918 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097717
2026-03-21T12:55:17.918 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:55:17.918 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]]
2026-03-21T12:55:17.918 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]]
2026-03-21T12:55:17.918 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:55:27.930 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:55:27.936 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:55:27.937 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:55:27.940 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:55:28.555 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491
2026-03-21T12:55:28.563 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:55:28.564 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097728
2026-03-21T12:55:28.564 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:55:28.564 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]]
2026-03-21T12:55:28.564 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]]
2026-03-21T12:55:28.564 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:55:38.567 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:55:38.567 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:55:38.579 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:55:38.595 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:55:39.001 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/840331491
2026-03-21T12:55:39.001 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:55:39.004 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097739
2026-03-21T12:55:39.004 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:55:39.004 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/840331491 ]]
2026-03-21T12:55:39.004 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/840331491 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]]
2026-03-21T12:55:39.004 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:55:49.011 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:55:49.017 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:55:49.023 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:55:49.040 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:55:49.691 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:55:49.700 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:55:49.712 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097749
2026-03-21T12:55:49.712 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:55:49.712 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:55:49.712 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\8\4\0\3\3\1\4\9\1 ]]
2026-03-21T12:55:49.712 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/166027568
2026-03-21T12:55:51.778 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/166027568 until 2026-03-21T13:55:50.766752+0000 (3600 sec)
2026-03-21T12:55:51.808 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:55:51.812 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/166027568
2026-03-21T12:55:52.359 INFO:tasks.workunit.client.0.vm01.stderr:listed 21 entries
2026-03-21T12:55:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:55:52.375 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:55:56.703 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:55:56.699+0000 7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380
2026-03-21T12:56:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.019+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.024 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.023+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9926a7500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.027+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.030 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.027+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9928a0a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e991f18f00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929e2b80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e9929e2080 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.032 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7c00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7a80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.035 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.031+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7c80 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992af7a00 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.035+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f380 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.044 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f280 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992b95200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f200 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.045 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f400 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.046 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.046 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.043+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f500 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.047+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.048 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.047+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f600 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.049 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.047+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.049 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.047+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f700 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.053 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.051+0000 7efe2b702640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:00.053 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:56:00.051+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992c6f800 failed to open image: (108) Cannot send after transport endpoint shutdown
2026-03-21T12:56:02.384 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:02.387 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:02.390 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:02.411 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:03.173 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:03.173 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:03.174 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097763
2026-03-21T12:56:03.174 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:03.174 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:03.174 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:03.174 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:56:13.176 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:13.178 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:13.179 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:13.199 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:13.857 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:13.858 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:13.859 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097773
2026-03-21T12:56:13.859 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:13.859 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:13.859 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:13.859 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:56:14.574 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T12:56:14.571+0000 7f2abfb51640 -1 log_channel(cluster) log [ERR] : Health check failed: mon a is very low on available space (MON_DISK_CRIT)
2026-03-21T12:56:23.861 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:23.861 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:23.861 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:23.881 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:24.573 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:24.573 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:24.582 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097784
2026-03-21T12:56:24.582 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:24.582 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:24.582 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:24.582 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:56:34.591 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:34.595 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:34.600 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:34.603 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:35.451 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:35.451 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:35.452 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097795
2026-03-21T12:56:35.452 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:35.452 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:35.452 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:35.452 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:56:45.454 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:45.460 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:45.466 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:45.484 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:45.939 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:45.939 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:45.940 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097805
2026-03-21T12:56:45.940 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:45.940 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:45.940 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:45.940 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:56:55.951 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:56:55.951 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:56:55.956 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:56:55.973 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:56:56.817 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/166027568
2026-03-21T12:56:56.817 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:56:56.818 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097816
2026-03-21T12:56:56.818 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:56:56.818 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/166027568 ]]
2026-03-21T12:56:56.818 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/166027568 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:56:56.818 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:57:06.825 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:57:06.825 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:57:06.840 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:57:06.852 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097827
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]]
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\1\6\6\0\2\7\5\6\8 ]]
2026-03-21T12:57:07.399 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/753428044
2026-03-21T12:57:09.000 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/753428044 until 2026-03-21T13:57:08.122569+0000 (3600 sec)
2026-03-21T12:57:09.023 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls
2026-03-21T12:57:09.023 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/753428044
2026-03-21T12:57:09.480 INFO:tasks.workunit.client.0.vm01.stderr:listed 22 entries
2026-03-21T12:57:09.508 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/753428044
2026-03-21T12:57:09.509 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:57:19.511 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:57:19.519 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:57:19.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:57:19.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:57:20.080 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044
2026-03-21T12:57:20.086 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:57:20.087 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097840
2026-03-21T12:57:20.087 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:57:20.087 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]]
2026-03-21T12:57:20.087 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]]
2026-03-21T12:57:20.087 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:57:30.093 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:57:30.096 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:57:30.105 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:57:30.119 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097850
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]]
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]]
2026-03-21T12:57:30.717 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:57:40.719 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:57:40.719 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:57:40.719 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:57:40.731 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:57:41.174 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044
2026-03-21T12:57:41.175 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:57:41.176 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097861
2026-03-21T12:57:41.176 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:57:41.176 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]]
2026-03-21T12:57:41.176 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]]
2026-03-21T12:57:41.176 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:57:51.181 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:57:51.182 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:57:51.184 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:57:51.194 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:57:51.695 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044 2026-03-21T12:57:51.695 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:57:51.696 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097871 2026-03-21T12:57:51.696 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:57:51.696 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]] 2026-03-21T12:57:51.696 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]] 2026-03-21T12:57:51.696 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:58:01.697 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:58:01.702 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:58:01.707 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:58:01.713 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:58:01.918 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.915+0000 7f5cde659640 -1 reset not still connected to 0x560579404820 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x560579668000 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 
0x560579c57ba0 2026-03-21T12:58:01.927 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0 2026-03-21T12:58:01.928 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00 2026-03-21T12:58:01.928 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:58:01.923+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930 2026-03-21T12:58:02.308 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/753428044 2026-03-21T12:58:02.308 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:58:02.309 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097882 2026-03-21T12:58:02.309 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:58:02.309 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/753428044 ]] 2026-03-21T12:58:02.309 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/753428044 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]] 2026-03-21T12:58:02.309 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:58:12.312 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:58:12.320 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:58:12.320 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:58:12.325 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:58:12.795 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:58:12.795 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:58:12.800 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097892 2026-03-21T12:58:12.800 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:58:12.800 
INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:58:12.800 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\5\3\4\2\8\0\4\4 ]] 2026-03-21T12:58:12.800 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/604329374 2026-03-21T12:58:14.231 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/604329374 until 2026-03-21T13:58:13.326181+0000 (3600 sec) 2026-03-21T12:58:14.268 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/604329374 2026-03-21T12:58:14.273 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:58:14.764 INFO:tasks.workunit.client.0.vm01.stderr:listed 23 entries 2026-03-21T12:58:14.816 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:58:14.816 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:58:24.830 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:58:24.831 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:58:24.831 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:58:24.847 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097905 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != 
\1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]]
2026-03-21T12:58:25.450 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:58:28.837 INFO:tasks.ceph.osd.1.vm01.stderr:problem writing to /var/log/ceph/ceph-osd.1.log: (28) No space left on device
2026-03-21T12:58:28.837 INFO:tasks.ceph.osd.0.vm01.stderr:problem writing to /var/log/ceph/ceph-osd.0.log: (28) No space left on device
2026-03-21T12:58:28.837 INFO:tasks.ceph.osd.2.vm01.stderr:problem writing to /var/log/ceph/ceph-osd.2.log: (28) No space left on device
2026-03-21T12:58:28.891 INFO:tasks.ceph.mgr.x.vm01.stderr:problem writing to /var/log/ceph/ceph-mgr.x.log: (28) No space left on device
2026-03-21T12:58:29.170 INFO:tasks.ceph.mon.a.vm01.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device
2026-03-21T12:58:35.468 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T12:58:35.477 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T12:58:35.478 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T12:58:35.506 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T12:58:36.116 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374
2026-03-21T12:58:36.116 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T12:58:36.117 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097916
2026-03-21T12:58:36.117 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T12:58:36.117 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]]
2026-03-21T12:58:36.117 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]]
2026-03-21T12:58:36.117 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T12:58:43.906
INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T12:58:43.903+0000 7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380 2026-03-21T12:58:46.119 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:58:46.122 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:58:46.123 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:58:46.134 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:58:46.612 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:58:46.612 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:58:46.613 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097926 2026-03-21T12:58:46.613 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:58:46.613 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:58:46.613 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]] 2026-03-21T12:58:46.613 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:58:56.615 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:58:56.615 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:58:56.619 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:58:56.627 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:58:57.163 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:58:57.163 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:58:57.164 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097937 2026-03-21T12:58:57.164 INFO:tasks.workunit.client.0.vm01.stderr:+ 
(( CURRENT_TIME <= END_TIME )) 2026-03-21T12:58:57.164 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:58:57.164 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]] 2026-03-21T12:58:57.164 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992006e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007200 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.013 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.015 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.007+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007480 failed to open image: 
(108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.011+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.017 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.011+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007280 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.028 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007600 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.029 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.023+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007700 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.036 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.031+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.038 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.031+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007800 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.063+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.069 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.063+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.073 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.067+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.073 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.067+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007a00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.076 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.071+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.076 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.071+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007b00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: 
failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992007d00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992007e00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2b702640 -1 librbd::ImageState: 0x55e992007f00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992007c00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992d12000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.086 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992d12100 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.087 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.083+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.087 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.083+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992d12200 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.088 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.083+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:00.088 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T12:59:00.083+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992d12300 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T12:59:07.167 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:59:07.172 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:59:07.192 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:59:07.212 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:59:07.584 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:59:07.584 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:59:07.592 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097947 2026-03-21T12:59:07.592 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:59:07.592 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:59:07.592 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]] 2026-03-21T12:59:07.592 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:17.594 
INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:59:17.594 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:59:17.602 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:59:17.604 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:59:18.264 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/604329374 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097958 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/604329374 ]] 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/604329374 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]] 2026-03-21T12:59:18.273 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:28.287 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:59:28.292 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:59:28.298 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:59:28.320 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:59:29.015 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T12:59:29.015 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:59:29.016 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097969 2026-03-21T12:59:29.016 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:59:29.016 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 
2026-03-21T12:59:29.016 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\6\0\4\3\2\9\3\7\4 ]] 2026-03-21T12:59:29.016 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist add 192.168.123.101:0/74241780 2026-03-21T12:59:30.572 INFO:tasks.workunit.client.0.vm01.stderr:blocklisting 192.168.123.101:0/74241780 until 2026-03-21T13:59:29.698599+0000 (3600 sec) 2026-03-21T12:59:30.607 INFO:tasks.workunit.client.0.vm01.stderr:+ ceph osd blocklist ls 2026-03-21T12:59:30.610 INFO:tasks.workunit.client.0.vm01.stderr:+ grep -q 192.168.123.101:0/74241780 2026-03-21T12:59:31.027 INFO:tasks.workunit.client.0.vm01.stderr:listed 24 entries 2026-03-21T12:59:31.078 INFO:tasks.workunit.client.0.vm01.stderr:+ PREV_CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T12:59:31.078 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:33.760 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.755+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70 2026-03-21T12:59:33.772 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.767+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520 2026-03-21T12:59:33.772 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.767+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0 2026-03-21T12:59:33.772 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.767+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0 2026-03-21T12:59:33.772 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.767+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00 2026-03-21T12:59:33.772 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:33.767+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930 2026-03-21T12:59:41.085 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:59:41.086 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 
2026-03-21T12:59:41.091 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:59:41.093 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:59:41.722 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T12:59:41.722 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:59:41.723 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097981 2026-03-21T12:59:41.723 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:59:41.723 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 2026-03-21T12:59:41.723 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\4\2\4\1\7\8\0 ]] 2026-03-21T12:59:41.723 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:51.725 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T12:59:51.725 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T12:59:51.728 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T12:59:51.738 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T12:59:52.337 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T12:59:52.337 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T12:59:52.340 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774097992 2026-03-21T12:59:52.340 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T12:59:52.340 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 2026-03-21T12:59:52.340 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\4\2\4\1\7\8\0 ]] 2026-03-21T12:59:52.340 
INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T12:59:53.291 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.287+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680 2026-03-21T12:59:53.292 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.287+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579404820 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579668000 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70 2026-03-21T12:59:53.297 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520 2026-03-21T12:59:53.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0 2026-03-21T12:59:53.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0 2026-03-21T12:59:53.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0 2026-03-21T12:59:53.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00 2026-03-21T12:59:53.298 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T12:59:53.291+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930 2026-03-21T13:00:00.027 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a2a80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e9929a3500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c06d00 
failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c06880 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992006e80 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e99214aa00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.027 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.019+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991c06000 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 
INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992a53500 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992a52600 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.042 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.035+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992006380 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.060 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.055+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.060 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.055+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94900 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.059+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.067 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.059+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992a52400 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.075+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.075+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.075+0000 7efe2af01640 -1 librbd::ImageState: 
0x55e992a52780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.081 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.075+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13200 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.079+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e991d13180 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.085 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.079+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b95580 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.092 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.087+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.092 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.087+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.092 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.087+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992b94780 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.092 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.087+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992d12800 failed to open image: (108) Cannot send after transport endpoint shutdown 
2026-03-21T13:00:00.099 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.091+0000 7efe2af01640 -1 librbd::image::OpenRequest: failed to retrieve name: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:00.100 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:00:00.095+0000 7efe2af01640 -1 librbd::ImageState: 0x55e992c6ff00 failed to open image: (108) Cannot send after transport endpoint shutdown 2026-03-21T13:00:02.343 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T13:00:02.343 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T13:00:02.348 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T13:00:02.351 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T13:00:02.849 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T13:00:02.849 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T13:00:02.850 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098002 2026-03-21T13:00:02.850 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T13:00:02.850 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 2026-03-21T13:00:02.850 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\4\2\4\1\7\8\0 ]] 2026-03-21T13:00:02.850 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T13:00:12.852 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T13:00:12.856 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T13:00:12.861 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T13:00:12.863 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T13:00:13.476 
INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T13:00:13.476 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T13:00:13.477 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098013 2026-03-21T13:00:13.477 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T13:00:13.477 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 2026-03-21T13:00:13.477 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\4\2\4\1\7\8\0 ]] 2026-03-21T13:00:13.477 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T13:00:23.479 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump 2026-03-21T13:00:23.479 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")' 2026-03-21T13:00:23.482 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add' 2026-03-21T13:00:23.487 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]' 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=192.168.123.101:0/74241780 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098024 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME )) 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n 192.168.123.101:0/74241780 ]] 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ 192.168.123.101:0/74241780 != \1\9\2\.\1\6\8\.\1\2\3\.\1\0\1\:\0\/\7\4\2\4\1\7\8\0 ]] 2026-03-21T13:00:24.184 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10 2026-03-21T13:00:29.835 INFO:tasks.ceph.mon.a.vm01.stderr:2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 rocksdb: submit_common error: IO error: No space left on device: While appending to file: 
/var/lib/ceph/mon/ceph-a/store.db/000038.sst: No space left on device code =  Rocksdb transaction: 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = '1954' value size = 4672) 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:./src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef)' thread 7f2abfb51640 time 2026-03-21T13:00:29.833942+0000 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:./src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None) 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc7) [0x7f2ac6075544] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 2: ceph-mon(+0x26c3a0) [0x557c482893a0] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 4: (Paxos::propose_pending()+0x13f) [0x557c483febdf] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (Paxos::trigger_propose()+0x146) [0x557c483ff776] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 6: (PaxosService::propose_pending()+0x288) [0x557c48407f08] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 7: ceph-mon(+0x26c6ad) [0x557c482896ad] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 9: 
/usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 10: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 11: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0]
2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr:*** Caught signal (Aborted) ** 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: in thread 7f2abfb51640 thread_name:safe_timer 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None) 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 1: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f2ac5719520] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 2: pthread_kill() 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 3: raise() 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 4: abort() 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x17f) [0x7f2ac60755fc] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 6: ceph-mon(+0x26c3a0) [0x557c482893a0] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (Paxos::propose_pending()+0x13f) [0x557c483febdf] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 9: (Paxos::trigger_propose()+0x146) [0x557c483ff776] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 10: (PaxosService::propose_pending()+0x288) [0x557c48407f08] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 11: ceph-mon(+0x26c6ad) [0x557c482896ad] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 12: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 13: /usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 14: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3] 2026-03-21T13:00:29.836 INFO:tasks.ceph.mon.a.vm01.stderr: 15: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0]
2026-03-21T13:00:29.837 INFO:tasks.ceph.mon.a.vm01.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 2026-03-21T13:00:29.837 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.837 INFO:tasks.ceph.mon.a.vm01.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-21T13:00:29.850 INFO:tasks.ceph.mon.a.vm01.stderr: -2> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 rocksdb: submit_common error: IO error: No space left on device: While appending to file: /var/lib/ceph/mon/ceph-a/store.db/000038.sst: No space left on device code =  Rocksdb transaction: 2026-03-21T13:00:29.850 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = '1954' value size = 4672) 2026-03-21T13:00:29.850 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-03-21T13:00:29.850 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: -1> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 ./src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef)' thread 7f2abfb51640 time 2026-03-21T13:00:29.833942+0000 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr:./src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-21T13:00:29.853
INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None) 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc7) [0x7f2ac6075544] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 2: ceph-mon(+0x26c3a0) [0x557c482893a0] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 4: (Paxos::propose_pending()+0x13f) [0x557c483febdf] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (Paxos::trigger_propose()+0x146) [0x557c483ff776] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 6: (PaxosService::propose_pending()+0x288) [0x557c48407f08] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 7: ceph-mon(+0x26c6ad) [0x557c482896ad] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 9: /usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 10: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 11: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 0> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 *** Caught signal (Aborted) ** 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: in thread 7f2abfb51640 thread_name:safe_timer 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b 
(70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None) 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 1: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f2ac5719520] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 2: pthread_kill() 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 3: raise() 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 4: abort() 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x17f) [0x7f2ac60755fc] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 6: ceph-mon(+0x26c3a0) [0x557c482893a0] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (Paxos::propose_pending()+0x13f) [0x557c483febdf] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 9: (Paxos::trigger_propose()+0x146) [0x557c483ff776] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 10: (PaxosService::propose_pending()+0x288) [0x557c48407f08] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 11: ceph-mon(+0x26c6ad) [0x557c482896ad] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 12: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 13: /usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 14: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 15: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0] 2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this. 
2026-03-21T13:00:29.853 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.854 INFO:tasks.ceph.mon.a.vm01.stderr:problem writing to /var/log/ceph/ceph-mon.a.log: (28) No space left on device 2026-03-21T13:00:29.869 INFO:tasks.ceph.mon.a.vm01.stderr: -9999> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 rocksdb: submit_common error: IO error: No space left on device: While appending to file: /var/lib/ceph/mon/ceph-a/store.db/000038.sst: No space left on device code =  Rocksdb transaction: 2026-03-21T13:00:29.869 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = '1954' value size = 4672) 2026-03-21T13:00:29.869 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_v' value size = 8) 2026-03-21T13:00:29.869 INFO:tasks.ceph.mon.a.vm01.stderr:PutCF( prefix = paxos key = 'pending_pn' value size = 8) 2026-03-21T13:00:29.869 INFO:tasks.ceph.mon.a.vm01.stderr: -9998> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 ./src/mon/MonitorDBStore.h: In function 'int MonitorDBStore::apply_transaction(MonitorDBStore::TransactionRef)' thread 7f2abfb51640 time 2026-03-21T13:00:29.833942+0000 2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr:./src/mon/MonitorDBStore.h: 356: ceph_abort_msg("failed to write to db") 2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None) 2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 1: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0xc7) [0x7f2ac6075544] 2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 2: ceph-mon(+0x26c3a0) [0x557c482893a0]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 3: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 4: (Paxos::propose_pending()+0x13f) [0x557c483febdf]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (Paxos::trigger_propose()+0x146) [0x557c483ff776]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 6: (PaxosService::propose_pending()+0x288) [0x557c48407f08]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 7: ceph-mon(+0x26c6ad) [0x557c482896ad]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 9: /usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 10: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 11: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr:
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: -9997> 2026-03-21T13:00:29.827+0000 7f2abfb51640 -1 *** Caught signal (Aborted) **
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: in thread 7f2abfb51640 thread_name:safe_timer
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr:
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: ceph version 20.2.0-712-g70f8415b (70f8415b300f041766fa27faf7d5472699e32388) tentacle (stable - None)
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 1: /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x7f2ac5719520]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 2: pthread_kill()
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 3: raise()
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 4: abort()
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 5: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string, std::allocator > const&)+0x17f) [0x7f2ac60755fc]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 6: ceph-mon(+0x26c3a0) [0x557c482893a0]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 7: (Paxos::begin(ceph::buffer::v15_2_0::list&)+0x966) [0x557c483fe196]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 8: (Paxos::propose_pending()+0x13f) [0x557c483febdf]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 9: (Paxos::trigger_propose()+0x146) [0x557c483ff776]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 10: (PaxosService::propose_pending()+0x288) [0x557c48407f08]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 11: ceph-mon(+0x26c6ad) [0x557c482896ad]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 12: (CommonSafeTimer::timer_thread()+0x124) [0x7f2ac617dcc4]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 13: /usr/lib/ceph/libceph-common.so.2(+0x295751) [0x7f2ac617e751]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 14: /lib/x86_64-linux-gnu/libc.so.6(+0x94ac3) [0x7f2ac576bac3]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: 15: /lib/x86_64-linux-gnu/libc.so.6(+0x1268d0) [0x7f2ac57fd8d0]
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr: NOTE: a copy of the executable, or `objdump -rdS ` is needed to interpret this.
2026-03-21T13:00:29.870 INFO:tasks.ceph.mon.a.vm01.stderr:
2026-03-21T13:00:30.044 INFO:tasks.ceph.mon.a.vm01.stderr:daemon-helper: command crashed with signal 6
2026-03-21T13:00:30.232 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~0s
2026-03-21T13:00:34.192 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:00:34.196 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:00:34.210 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:00:34.216 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:00:35.634 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~5s
2026-03-21T13:00:41.036 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~11s
2026-03-21T13:00:46.439 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~16s
2026-03-21T13:00:51.841 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~22s
2026-03-21T13:00:57.244 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~27s
2026-03-21T13:01:02.645 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~32s
2026-03-21T13:01:08.048 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~38s
2026-03-21T13:01:13.451 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~43s
2026-03-21T13:01:18.853 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~49s
2026-03-21T13:01:24.255 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~54s
2026-03-21T13:01:29.657 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~59s
2026-03-21T13:01:35.059 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~65s
2026-03-21T13:01:40.462 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~70s
2026-03-21T13:01:42.329 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T13:01:42.323+0000 7f4e5cdc1640 -1 reset not still connected to 0x5565ef75d380
2026-03-21T13:01:45.864 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~76s
2026-03-21T13:01:51.267 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~81s
2026-03-21T13:01:56.669 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~86s
2026-03-21T13:02:02.071 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~92s
2026-03-21T13:02:07.474 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~97s
2026-03-21T13:02:12.876 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~103s
2026-03-21T13:02:18.279 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~108s
2026-03-21T13:02:23.681 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~113s
2026-03-21T13:02:29.084 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~119s
2026-03-21T13:02:34.486 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~124s
2026-03-21T13:02:39.889 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~130s
2026-03-21T13:02:45.291 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~135s
2026-03-21T13:02:50.693 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~140s
2026-03-21T13:02:56.095 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~146s
2026-03-21T13:03:01.498 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~151s
2026-03-21T13:03:06.899 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~157s
2026-03-21T13:03:12.302 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~162s
2026-03-21T13:03:17.704 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~167s
2026-03-21T13:03:23.106 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~173s
2026-03-21T13:03:28.509 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~178s
2026-03-21T13:03:33.912 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~184s
2026-03-21T13:03:38.481 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.475+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x5605789129c0
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560578d76ea0
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560578fdc680
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T13:03:38.486 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:38.479+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T13:03:39.127 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:03:39.111+0000 7f5cde659640 -1 reset not still connected to 0x5605781ec680
2026-03-21T13:03:39.315 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~189s
2026-03-21T13:03:44.718 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~194s
2026-03-21T13:03:50.120 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~200s
2026-03-21T13:03:55.523 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~205s
2026-03-21T13:04:00.926 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~211s
2026-03-21T13:04:06.328 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~216s
2026-03-21T13:04:11.730 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~221s
2026-03-21T13:04:17.133 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~227s
2026-03-21T13:04:22.535 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~232s
2026-03-21T13:04:27.938 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~238s
2026-03-21T13:04:33.340 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~243s
2026-03-21T13:04:38.742 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~249s
2026-03-21T13:04:44.144 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~254s
2026-03-21T13:04:46.480 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.471+0000 7f5cde659640 -1 reset not still connected to 0x560578fdd110
2026-03-21T13:04:46.480 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.471+0000 7f5cde659640 -1 reset not still connected to 0x560579404820
2026-03-21T13:04:46.483 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.475+0000 7f5cde659640 -1 reset not still connected to 0x560579668000
2026-03-21T13:04:46.483 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.475+0000 7f5cde659640 -1 reset not still connected to 0x560579904f70
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1520
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x560579aa1ba0
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x560579c57ba0
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x560579cd1ad0
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x56057a14ed00
2026-03-21T13:04:46.500 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:04:46.495+0000 7f5cde659640 -1 reset not still connected to 0x56057a6c7930
2026-03-21T13:04:49.547 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~259s
2026-03-21T13:04:54.949 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~265s
2026-03-21T13:05:00.352 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~270s
2026-03-21T13:05:05.754 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~276s
2026-03-21T13:05:11.156 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~281s
2026-03-21T13:05:16.558 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~286s
2026-03-21T13:05:21.961 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~292s
2026-03-21T13:05:27.363 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~297s
2026-03-21T13:05:32.765 INFO:tasks.daemonwatchdog.daemon_watchdog:daemon ceph.mon.a is failed for ~303s
2026-03-21T13:05:32.765 INFO:tasks.daemonwatchdog.daemon_watchdog:BARK! unmounting mounts and killing all daemons
2026-03-21T13:05:33.167 INFO:tasks.ceph.osd.0:Sent signal 15
2026-03-21T13:05:33.167 INFO:tasks.ceph.osd.1:Sent signal 15
2026-03-21T13:05:33.167 INFO:tasks.ceph.osd.2:Sent signal 15
2026-03-21T13:05:33.168 INFO:tasks.ceph.mgr.x:Sent signal 15
2026-03-21T13:05:33.168 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T13:05:33.163+0000 7f4e676da640 -1 received signal: Terminated from /usr/bin/python3 /usr/bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 (PID: 22922) UID: 0
2026-03-21T13:05:33.168 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T13:05:33.163+0000 7f4e676da640 -1 osd.0 92 *** Got signal Terminated ***
2026-03-21T13:05:33.168 INFO:tasks.ceph.osd.0.vm01.stderr:2026-03-21T13:05:33.163+0000 7f4e676da640 -1 osd.0 92 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T13:05:33.168 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:05:33.163+0000 7efe6c518640 -1 received signal: Terminated from /usr/bin/python3 /usr/bin/daemon-helper kill ceph-mgr -f --cluster ceph -i x (PID: 22730) UID: 0
2026-03-21T13:05:33.168 INFO:tasks.ceph.mgr.x.vm01.stderr:2026-03-21T13:05:33.163+0000 7efe6c518640 -1 mgr handle_mgr_signal *** Got signal Terminated ***
2026-03-21T13:05:33.171 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T13:05:33.163+0000 7fb17d004640 -1 received signal: Terminated from /usr/bin/python3 /usr/bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2 (PID: 22923) UID: 0
2026-03-21T13:05:33.171 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T13:05:33.163+0000 7fb17d004640 -1 osd.2 92 *** Got signal Terminated ***
2026-03-21T13:05:33.171 INFO:tasks.ceph.osd.2.vm01.stderr:2026-03-21T13:05:33.163+0000 7fb17d004640 -1 osd.2 92 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T13:05:33.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:05:33.163+0000 7f5ce8f72640 -1 received signal: Terminated from /usr/bin/python3 /usr/bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 (PID: 22921) UID: 0
2026-03-21T13:05:33.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:05:33.163+0000 7f5ce8f72640 -1 osd.1 92 *** Got signal Terminated ***
2026-03-21T13:05:33.172 INFO:tasks.ceph.osd.1.vm01.stderr:2026-03-21T13:05:33.163+0000 7f5ce8f72640 -1 osd.1 92 *** Immediate shutdown (osd_fast_shutdown=true) ***
2026-03-21T13:05:34.333 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:05:34.327+0000 7f142092e640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:05:34.333 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34844.log: (28) No space left on device
2026-03-21T13:05:34.334 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:05:34.338 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:05:34.338 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:05:34.339 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098334
2026-03-21T13:05:34.339 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:05:34.339 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:05:34.339 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:05:44.341 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:05:44.341 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:05:44.341 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:05:44.341 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:10:44.406 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:10:44.403+0000 7fc934c27640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:10:44.406 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34876.log: (28) No space left on device
2026-03-21T13:10:44.406 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:10:44.409 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:10:44.409 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:10:44.410 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098644
2026-03-21T13:10:44.410 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:10:44.410 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:10:44.410 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:10:54.411 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:10:54.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:10:54.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:10:54.412 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:15:54.473 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:15:54.468+0000 7f35367e8640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:15:54.473 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34894.log: (28) No space left on device
2026-03-21T13:15:54.473 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:15:54.476 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:15:54.476 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:15:54.477 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774098954
2026-03-21T13:15:54.477 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:15:54.477 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:15:54.477 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:16:04.479 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:16:04.479 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:16:04.479 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:16:04.479 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:21:04.542 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:21:04.536+0000 7fc6ec752640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:21:04.542 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34915.log: (28) No space left on device
2026-03-21T13:21:04.542 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:21:04.545 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:21:04.545 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:21:04.545 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774099264
2026-03-21T13:21:04.546 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:21:04.546 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:21:04.546 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:21:14.547 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:21:14.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:21:14.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:21:14.547 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:26:14.608 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:26:14.600+0000 7fad9b11e640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:26:14.608 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34936.log: (28) No space left on device
2026-03-21T13:26:14.608 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:26:14.611 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:26:14.611 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:26:14.612 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774099574
2026-03-21T13:26:14.612 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:26:14.612 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:26:14.612 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:26:24.613 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:26:24.613 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:26:24.613 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:26:24.613 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:31:24.683 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:31:24.677+0000 7fca6609a640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:31:24.683 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34960.log: (28) No space left on device
2026-03-21T13:31:24.683 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:31:24.686 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:31:24.686 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:31:24.687 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774099884
2026-03-21T13:31:24.687 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:31:24.687 INFO:tasks.workunit.client.0.vm01.stderr:+ [[ -n '' ]]
2026-03-21T13:31:24.687 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:31:34.688 INFO:tasks.workunit.client.0.vm01.stderr:++ ceph mgr dump
2026-03-21T13:31:34.689 INFO:tasks.workunit.client.0.vm01.stderr:++ jq '.active_clients[]'
2026-03-21T13:31:34.689 INFO:tasks.workunit.client.0.vm01.stderr:++ jq 'select(.name == "rbd_support")'
2026-03-21T13:31:34.689 INFO:tasks.workunit.client.0.vm01.stderr:++ jq -r '[.addrvec[0].addr, "/", .addrvec[0].nonce|tostring] | add'
2026-03-21T13:36:34.755 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:36:34.753+0000 7f7f3cc7f640 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:36:34.755 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34978.log: (28) No space left on device
2026-03-21T13:36:34.755 INFO:tasks.workunit.client.0.vm01.stderr:[errno 110] RADOS timed out (error connecting to the cluster)
2026-03-21T13:36:34.758 INFO:tasks.workunit.client.0.vm01.stderr:+ CLIENT_ADDR=
2026-03-21T13:36:34.758 INFO:tasks.workunit.client.0.vm01.stderr:++ date +%s
2026-03-21T13:36:34.759 INFO:tasks.workunit.client.0.vm01.stderr:+ CURRENT_TIME=1774100194
2026-03-21T13:36:34.759 INFO:tasks.workunit.client.0.vm01.stderr:+ (( CURRENT_TIME <= END_TIME ))
2026-03-21T13:36:34.759 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i = 1 ))
2026-03-21T13:36:34.759 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T13:36:34.759 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T13:41:34.793 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:41:34.789+0000 7f821e9b8200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:41:34.793 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.34997.log: (28) No space left on device
2026-03-21T13:41:34.793 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T13:41:34.797 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:41:44.798 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T13:41:44.798 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T13:41:44.798 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T13:46:44.816 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:46:44.814+0000 7f5672470200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:46:44.816 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35007.log: (28) No space left on device
2026-03-21T13:46:44.816 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T13:46:44.819 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:46:54.820 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T13:46:54.820 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T13:46:54.820 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T13:51:54.838 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:51:54.834+0000 7f36181b0200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:51:54.838 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35020.log: (28) No space left on device
2026-03-21T13:51:54.838 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T13:51:54.841 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:52:04.842 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T13:52:04.842 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T13:52:04.842 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T13:57:04.860 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T13:57:04.859+0000 7f83413f1200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T13:57:04.860 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35030.log: (28) No space left on device
2026-03-21T13:57:04.860 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T13:57:04.863 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T13:57:14.864 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T13:57:14.864 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T13:57:14.864 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:02:14.881 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:02:14.875+0000 7f57bba00200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:02:14.881 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35046.log: (28) No space left on device
2026-03-21T14:02:14.881 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:02:14.884 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:02:24.885 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:02:24.885 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:02:24.885 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:07:24.902 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:07:24.896+0000 7f68b66e7200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:07:24.903 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35056.log: (28) No space left on device
2026-03-21T14:07:24.903 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:07:24.905 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:07:34.907 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:07:34.907 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:07:34.907 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:12:34.928 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:12:34.924+0000 7f936d66b200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:12:34.928 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35069.log: (28) No space left on device
2026-03-21T14:12:34.928 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:12:34.931 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:12:44.933 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:12:44.933 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:12:44.933 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:17:44.952 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:17:44.949+0000 7f5f5ccd5200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:17:44.953 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35079.log: (28) No space left on device
2026-03-21T14:17:44.953 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:17:44.955 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:17:54.957 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:17:54.957 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:17:54.957 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:22:54.976 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:22:54.973+0000 7f46dfa69200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:22:54.976 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35095.log: (28) No space left on device
2026-03-21T14:22:54.976 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:22:54.978 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:23:04.980 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:23:04.980 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:23:04.980 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:28:05.005 INFO:tasks.workunit.client.0.vm01.stderr:2026-03-21T14:28:05.001+0000 7f285408c200 0 monclient(hunting): authenticate timed out after 300
2026-03-21T14:28:05.005 INFO:tasks.workunit.client.0.vm01.stderr:problem writing to /var/log/ceph/ceph-client.admin.35108.log: (28) No space left on device
2026-03-21T14:28:05.005 INFO:tasks.workunit.client.0.vm01.stderr:rbd: couldn't connect to the cluster!
2026-03-21T14:28:05.008 INFO:tasks.workunit.client.0.vm01.stderr:+ sleep 10
2026-03-21T14:28:15.009 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i++ ))
2026-03-21T14:28:15.009 INFO:tasks.workunit.client.0.vm01.stderr:+ (( i <= 24 ))
2026-03-21T14:28:15.009 INFO:tasks.workunit.client.0.vm01.stderr:+ rbd mirror snapshot schedule add -p rbd --image image1 2m
2026-03-21T14:28:48.973 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2026-03-21T14:28:48.980 DEBUG:teuthology.exit:Finished running handlers