2026-03-20T11:42:27.675 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-20T11:42:27.679 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-20T11:42:27.695 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-20_10:58:43-rgw-tentacle-none-default-vps/2075
branch: tentacle
description: rgw/tools/{centos_latest cluster ignore-pg-availability tasks}
email: null
first_in_suite: false
flavor: default
job_id: '2075'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-20_10:58:43-rgw-tentacle-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 1
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: tentacle
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      logical_volumes:
        lv_1:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_2:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_3:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
        lv_4:
          scratch_dev: true
          size: 25%VG
          vg: vg_nvme
      timezone: UTC
      volume_groups:
        vg_nvme:
          pvs: /dev/vdb,/dev/vdc,/dev/vdd,/dev/vde
  ceph:
    conf:
      client:
        debug ms: 1
        debug rgw: 20
        rgw enable static website: false
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - \(PG_AVAILABILITY\)
    - \(PG_DEGRADED\)
    - \(POOL_APP_NOT_ENABLED\)
    - not have an application enabled
    sha1: 70f8415b300f041766fa27faf7d5472699e32388
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm
  install:
    ceph:
      flavor: default
      sha1: 70f8415b300f041766fa27faf7d5472699e32388
    extra_system_packages:
      deb:
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-jmespath
      - python3-xmltodict
      - s3cmd
  rgw:
    frontend: beast
  workunit:
    branch: tt-tentacle
    sha1: 7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - osd.0
  - osd.1
  - osd.2
  - mgr.0
  - client.0
seed: 7702
sha1: 70f8415b300f041766fa27faf7d5472699e32388
sleep_before_teardown: 0
suite: rgw
suite_branch: tt-tentacle
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKw9avWVk91afIbXkwyFOaonigzL3YxO5+mPEVDub9AWHO0sZOEv79VavLWGHxVnTUaem9r0phN/JMfoPxaloTs=
tasks:
- install: null
- ceph: null
- rgw:
    client.0:
      dns-name: ''
- workunit:
    clients:
      client.0:
      - rgw/test_rgw_orphan_list.sh
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-20_10:58:43
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345
2026-03-20T11:42:27.695 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe/qa; will attempt to use it
2026-03-20T11:42:27.695 INFO:teuthology.run:Found tasks at
/home/teuthos/src/github.com_kshtsk_ceph_7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe/qa/tasks
2026-03-20T11:42:27.696 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-20T11:42:27.696 INFO:teuthology.task.internal:Checking packages...
2026-03-20T11:42:27.696 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash '70f8415b300f041766fa27faf7d5472699e32388'
2026-03-20T11:42:27.696 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-20T11:42:27.696 INFO:teuthology.packaging:ref: None
2026-03-20T11:42:27.696 INFO:teuthology.packaging:tag: None
2026-03-20T11:42:27.696 INFO:teuthology.packaging:branch: tentacle
2026-03-20T11:42:27.696 INFO:teuthology.packaging:sha1: 70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T11:42:27.696 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=tentacle
2026-03-20T11:42:28.455 INFO:teuthology.task.internal:Found packages for ceph version 20.2.0-721.g5bb32787
2026-03-20T11:42:28.456 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-20T11:42:28.457 INFO:teuthology.task.internal:no buildpackages task found
2026-03-20T11:42:28.457 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-20T11:42:28.457 INFO:teuthology.task.internal:Saving configuration
2026-03-20T11:42:28.462 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-20T11:42:28.463 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-20T11:42:28.470 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-20_10:58:43-rgw-tentacle-none-default-vps/2075', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-20 11:41:51.355879', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKw9avWVk91afIbXkwyFOaonigzL3YxO5+mPEVDub9AWHO0sZOEv79VavLWGHxVnTUaem9r0phN/JMfoPxaloTs='}
2026-03-20T11:42:28.470 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-20T11:42:28.470 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'osd.0', 'osd.1', 'osd.2', 'mgr.0', 'client.0']
2026-03-20T11:42:28.470 INFO:teuthology.run_tasks:Running task console_log...
2026-03-20T11:42:28.477 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-20T11:42:28.477 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fdfedcc0b80>, signals=[15])
2026-03-20T11:42:28.477 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-20T11:42:28.478 INFO:teuthology.task.internal:Opening connections...
2026-03-20T11:42:28.478 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-20T11:42:28.478 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T11:42:28.537 INFO:teuthology.run_tasks:Running task internal.push_inventory...
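The Shaman query a few lines above is a plain HTTP GET against shaman.ceph.com. An equivalent standalone lookup is sketched below; this is illustrative only, not teuthology's own code, and it assumes jq is available and that each search result carries the sha1/url fields the API normally returns.
# Search for ready centos/9 x86_64 'default'-flavor builds of the ceph 'tentacle' branch.
curl -s 'https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=tentacle' \
    | jq -r '.[] | "\(.sha1) \(.url)"'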
2026-03-20T11:42:28.551 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-20T11:42:28.702 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-20T11:42:28.702 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-20T11:42:28.756 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-20T11:42:28.757 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-20T11:42:28.757 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-20T11:42:28.757 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-20T11:42:28.757 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-20T11:42:28.757 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-20T11:42:28.757 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-20T11:42:28.762 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-20T11:42:28.764 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-20T11:42:28.765 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-20T11:42:28.765 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-20T11:42:28.811 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-20T11:42:28.812 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-20T11:42:28.812 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-20T11:42:28.867 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-20T11:42:28.867 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-20T11:42:28.876 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-20T11:42:28.923 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T11:42:29.108 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-20T11:42:29.109 INFO:teuthology.task.internal:Creating test directory...
2026-03-20T11:42:29.110 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-20T11:42:29.125 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-20T11:42:29.126 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-20T11:42:29.127 INFO:teuthology.task.internal:Creating archive directory...
2026-03-20T11:42:29.127 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-20T11:42:29.182 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-20T11:42:29.184 INFO:teuthology.task.internal:Enabling coredump saving...
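The coredump task that starts here points kernel core dumps into the job's archive directory; the exact command appears just below. For reference, the pattern uses standard kernel core_pattern specifiers:
# %t = UNIX timestamp of the dump, %p = PID of the crashing process, so cores land
# as e.g. /home/ubuntu/cephtest/archive/coredump/1774007000.4321.core; the same
# value is also appended to /etc/sysctl.conf so it survives a reboot.
sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core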
2026-03-20T11:42:29.184 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-20T11:42:29.236 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-20T11:42:29.236 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-20T11:42:29.301 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T11:42:29.310 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-20T11:42:29.312 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-20T11:42:29.313 INFO:teuthology.task.internal:Configuring sudo...
2026-03-20T11:42:29.313 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-20T11:42:29.377 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-20T11:42:29.379 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-20T11:42:29.379 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-20T11:42:29.433 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-20T11:42:29.497 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-20T11:42:29.554 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T11:42:29.554 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-20T11:42:29.613 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-20T11:42:29.681 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-20T11:42:29.988 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-20T11:42:29.989 INFO:teuthology.task.internal:Starting timer...
2026-03-20T11:42:29.989 INFO:teuthology.run_tasks:Running task pcp...
2026-03-20T11:42:29.992 INFO:teuthology.run_tasks:Running task selinux...
2026-03-20T11:42:29.999 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-20T11:42:29.999 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-20T11:42:29.999 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-20T11:42:29.999 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-20T11:42:29.999 INFO:teuthology.run_tasks:Running task ansible.cephlab...
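The ansible.cephlab task that starts here checks out the ceph-cm-ansible branch named in the job's overrides and runs its cephlab.yml playbook against the node. The single-line invocation logged below amounts to the following, reformatted here for readability only:
# Same arguments as logged; the --extra-vars JSON (shown elided as '{...}') carries
# the logical_volumes / volume_groups / timezone vars from the overrides, and the
# -i inventory file is a throwaway generated by teuthology for this run.
ansible-playbook -v --extra-vars '{...}' \
    -i /tmp/teuth_ansible_inventorys47hi5ii --limit vm00.local \
    /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml \
    --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs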
2026-03-20T11:42:30.012 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}
2026-03-20T11:42:30.012 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-20T11:42:30.018 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-20T11:42:30.018 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "logical_volumes": {"lv_1": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_2": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_3": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}, "lv_4": {"scratch_dev": true, "size": "25%VG", "vg": "vg_nvme"}}, "timezone": "UTC", "volume_groups": {"vg_nvme": {"pvs": "/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde"}}}' -i /tmp/teuth_ansible_inventorys47hi5ii --limit vm00.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-20T11:44:07.161 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local')]
2026-03-20T11:44:07.161 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-20T11:44:07.162 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-20T11:44:07.225 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-20T11:44:07.306 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-20T11:44:07.306 INFO:teuthology.run_tasks:Running task clock...
2026-03-20T11:44:07.309 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-20T11:44:07.309 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-20T11:44:07.309 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-20T11:44:07.378 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
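The string of "Failed to ...", "command not found" and "506 Cannot talk to daemon" messages that follows is expected noise from the clock one-liner above: it tries each time daemon in turn, and only chrony is installed on CentOS Stream 9. The same commands, broken out with comments:
# Stop whichever daemon exists; ntp/ntpd are absent, so only the chronyd stop takes effect.
sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service
# Step the clock immediately; ntpd is not installed, and chronyc makestep prints
# "506 Cannot talk to daemon" because chronyd was just stopped.
sudo ntpd -gq || sudo chronyc makestep
# Restart the daemon, again falling through to chronyd.
sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service
# Report peers: ntpq is not installed, so the chronyc sources table is what gets printed.
PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true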
2026-03-20T11:44:07.396 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-20T11:44:07.424 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-20T11:44:07.435 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-20T11:44:07.450 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-20T11:44:07.464 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-20T11:44:07.515 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-20T11:44:07.517 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-20T11:44:07.517 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-20T11:44:07.517 INFO:teuthology.run_tasks:Running task install...
2026-03-20T11:44:07.519 DEBUG:teuthology.task.install:project ceph
2026-03-20T11:44:07.519 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T11:44:07.519 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}
2026-03-20T11:44:07.519 INFO:teuthology.task.install:Using flavor: default
2026-03-20T11:44:07.522 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-20T11:44:07.522 INFO:teuthology.task.install:extra packages: []
2026-03-20T11:44:07.522 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'tag': None, 'wait_for_package': False}
2026-03-20T11:44:07.522 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388
2026-03-20T11:44:08.118 INFO:teuthology.task.install.rpm:Pulling from
https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/
2026-03-20T11:44:08.118 INFO:teuthology.task.install.rpm:Package version is 20.2.0-712.g70f8415b
2026-03-20T11:44:08.665 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md
[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/tentacle/70f8415b300f041766fa27faf7d5472699e32388/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-20T11:44:08.665 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-20T11:44:08.665 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-20T11:44:08.695 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-jmespath, python3-xmltodict, s3cmd on remote rpm x86_64
2026-03-20T11:44:08.695 DEBUG:teuthology.orchestra.run.vm00:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/70f8415b300f041766fa27faf7d5472699e32388/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-20T11:44:08.768 DEBUG:teuthology.orchestra.run.vm00:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-20T11:44:08.855 DEBUG:teuthology.orchestra.run.vm00:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-20T11:44:08.886 INFO:teuthology.orchestra.run.vm00.stdout:check_obsoletes = 1
2026-03-20T11:44:08.887 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-03-20T11:44:09.069 INFO:teuthology.orchestra.run.vm00.stdout:41 files removed
2026-03-20T11:44:09.100 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-jmespath python3-xmltodict s3cmd
2026-03-20T11:44:10.413 INFO:teuthology.orchestra.run.vm00.stdout:ceph packages for x86_64 77 kB/s | 87 kB 00:01
2026-03-20T11:44:11.483 INFO:teuthology.orchestra.run.vm00.stdout:ceph noarch packages 17 kB/s | 18 kB 00:01
2026-03-20T11:44:12.472 INFO:teuthology.orchestra.run.vm00.stdout:ceph source packages 2.0 kB/s | 1.9 kB 00:00
2026-03-20T11:44:13.194 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - BaseOS 13
MB/s | 8.9 MB 00:00 2026-03-20T11:44:15.172 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - AppStream 20 MB/s | 27 MB 00:01 2026-03-20T11:44:18.962 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - CRB 8.2 MB/s | 8.0 MB 00:00 2026-03-20T11:44:20.469 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - Extras packages 31 kB/s | 20 kB 00:00 2026-03-20T11:44:22.353 INFO:teuthology.orchestra.run.vm00.stdout:Extra Packages for Enterprise Linux 11 MB/s | 20 MB 00:01 2026-03-20T11:44:27.024 INFO:teuthology.orchestra.run.vm00.stdout:lab-extras 65 kB/s | 50 kB 00:00 2026-03-20T11:44:28.362 INFO:teuthology.orchestra.run.vm00.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T11:44:28.362 INFO:teuthology.orchestra.run.vm00.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed. 2026-03-20T11:44:28.394 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout:Installing: 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: bzip2 x86_64 1.0.8-11.el9 baseos 55 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.5 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.9 M 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 939 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:20.2.0-712.g70f8415b.el9 ceph 154 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:20.2.0-712.g70f8415b.el9 ceph 962 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 173 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 11 M 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 7.4 M 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 50 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:20.2.0-712.g70f8415b.el9 ceph 84 M 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 298 k 2026-03-20T11:44:28.398 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 1.0 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:20.2.0-712.g70f8415b.el9 ceph 34 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 866 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 
2:20.2.0-712.g70f8415b.el9 ceph 126 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness noarch 1:3.42-461.el9 appstream 295 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:20.2.0-712.g70f8415b.el9 ceph 163 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:20.2.0-712.g70f8415b.el9 ceph 324 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 304 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:20.2.0-712.g70f8415b.el9 ceph 99 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 91 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.9 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 180 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd noarch 2.4.0-1.el9 epel 206 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout:Upgrading: 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 3.5 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.8 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies: 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 43 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:20.2.0-712.g70f8415b.el9 ceph 2.3 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 290 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:20.2.0-712.g70f8415b.el9 ceph 5.0 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:20.2.0-712.g70f8415b.el9 ceph 17 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:20.2.0-712.g70f8415b.el9 ceph-noarch 17 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:20.2.0-712.g70f8415b.el9 ceph 25 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k 2026-03-20T11:44:28.399 
INFO:teuthology.orchestra.run.vm00.stdout: fuse x86_64 2.9.9-17.el9 baseos 80 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 24 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:20.2.0-712.g70f8415b.el9 ceph 164 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 250 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:20.2.0-712.g70f8415b.el9 ceph 6.4 M 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k 2026-03-20T11:44:28.399 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark noarch 1.23-483.el9 appstream 26 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 
3.14.0-17.el9 appstream 1.0 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:20.2.0-712.g70f8415b.el9 ceph 45 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:20.2.0-712.g70f8415b.el9 ceph 175 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort 
noarch 7.1.1-5.el9 epel 58 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 appstream 240 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-20T11:44:28.400 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout: zip 
x86_64 3.0-35.el9 baseos 266 k 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Installing weak dependencies: 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Install 136 Packages 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Upgrade 2 Packages 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 267 M 2026-03-20T11:44:28.401 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages: 2026-03-20T11:44:29.696 INFO:teuthology.orchestra.run.vm00.stdout:(1/138): ceph-20.2.0-712.g70f8415b.el9.x86_64.r 13 kB/s | 6.5 kB 00:00 2026-03-20T11:44:30.548 INFO:teuthology.orchestra.run.vm00.stdout:(2/138): ceph-fuse-20.2.0-712.g70f8415b.el9.x86 1.1 MB/s | 939 kB 00:00 2026-03-20T11:44:30.671 INFO:teuthology.orchestra.run.vm00.stdout:(3/138): ceph-immutable-object-cache-20.2.0-712 1.2 MB/s | 154 kB 00:00 2026-03-20T11:44:31.286 INFO:teuthology.orchestra.run.vm00.stdout:(4/138): ceph-mds-20.2.0-712.g70f8415b.el9.x86_ 3.8 MB/s | 2.3 MB 00:00 2026-03-20T11:44:31.342 INFO:teuthology.orchestra.run.vm00.stdout:(5/138): ceph-base-20.2.0-712.g70f8415b.el9.x86 2.8 MB/s | 5.9 MB 00:02 2026-03-20T11:44:31.535 INFO:teuthology.orchestra.run.vm00.stdout:(6/138): ceph-mgr-20.2.0-712.g70f8415b.el9.x86_ 3.8 MB/s | 962 kB 00:00 2026-03-20T11:44:32.179 INFO:teuthology.orchestra.run.vm00.stdout:(7/138): ceph-mon-20.2.0-712.g70f8415b.el9.x86_ 6.0 MB/s | 5.0 MB 00:00 2026-03-20T11:44:33.487 INFO:teuthology.orchestra.run.vm00.stdout:(8/138): ceph-common-20.2.0-712.g70f8415b.el9.x 5.6 MB/s | 24 MB 00:04 2026-03-20T11:44:33.602 INFO:teuthology.orchestra.run.vm00.stdout:(9/138): ceph-selinux-20.2.0-712.g70f8415b.el9. 219 kB/s | 25 kB 00:00 2026-03-20T11:44:33.782 INFO:teuthology.orchestra.run.vm00.stdout:(10/138): ceph-osd-20.2.0-712.g70f8415b.el9.x86 7.6 MB/s | 17 MB 00:02 2026-03-20T11:44:33.904 INFO:teuthology.orchestra.run.vm00.stdout:(11/138): libcephfs-devel-20.2.0-712.g70f8415b. 
283 kB/s | 34 kB 00:00 2026-03-20T11:44:34.026 INFO:teuthology.orchestra.run.vm00.stdout:(12/138): libcephfs-proxy2-20.2.0-712.g70f8415b 198 kB/s | 24 kB 00:00 2026-03-20T11:44:34.160 INFO:teuthology.orchestra.run.vm00.stdout:(13/138): libcephfs2-20.2.0-712.g70f8415b.el9.x 6.4 MB/s | 866 kB 00:00 2026-03-20T11:44:34.284 INFO:teuthology.orchestra.run.vm00.stdout:(14/138): libcephsqlite-20.2.0-712.g70f8415b.el 1.3 MB/s | 164 kB 00:00 2026-03-20T11:44:34.407 INFO:teuthology.orchestra.run.vm00.stdout:(15/138): librados-devel-20.2.0-712.g70f8415b.e 1.0 MB/s | 126 kB 00:00 2026-03-20T11:44:34.533 INFO:teuthology.orchestra.run.vm00.stdout:(16/138): libradosstriper1-20.2.0-712.g70f8415b 2.0 MB/s | 250 kB 00:00 2026-03-20T11:44:34.609 INFO:teuthology.orchestra.run.vm00.stdout:(17/138): ceph-radosgw-20.2.0-712.g70f8415b.el9 9.7 MB/s | 24 MB 00:02 2026-03-20T11:44:34.726 INFO:teuthology.orchestra.run.vm00.stdout:(18/138): python3-ceph-argparse-20.2.0-712.g70f 381 kB/s | 45 kB 00:00 2026-03-20T11:44:34.847 INFO:teuthology.orchestra.run.vm00.stdout:(19/138): python3-ceph-common-20.2.0-712.g70f84 1.4 MB/s | 175 kB 00:00 2026-03-20T11:44:34.967 INFO:teuthology.orchestra.run.vm00.stdout:(20/138): python3-cephfs-20.2.0-712.g70f8415b.e 1.3 MB/s | 163 kB 00:00 2026-03-20T11:44:35.091 INFO:teuthology.orchestra.run.vm00.stdout:(21/138): python3-rados-20.2.0-712.g70f8415b.el 2.6 MB/s | 324 kB 00:00 2026-03-20T11:44:35.213 INFO:teuthology.orchestra.run.vm00.stdout:(22/138): python3-rbd-20.2.0-712.g70f8415b.el9. 2.4 MB/s | 304 kB 00:00 2026-03-20T11:44:35.333 INFO:teuthology.orchestra.run.vm00.stdout:(23/138): python3-rgw-20.2.0-712.g70f8415b.el9. 830 kB/s | 99 kB 00:00 2026-03-20T11:44:35.396 INFO:teuthology.orchestra.run.vm00.stdout:(24/138): librgw2-20.2.0-712.g70f8415b.el9.x86_ 7.4 MB/s | 6.4 MB 00:00 2026-03-20T11:44:35.452 INFO:teuthology.orchestra.run.vm00.stdout:(25/138): rbd-fuse-20.2.0-712.g70f8415b.el9.x86 764 kB/s | 91 kB 00:00 2026-03-20T11:44:35.609 INFO:teuthology.orchestra.run.vm00.stdout:(26/138): rbd-nbd-20.2.0-712.g70f8415b.el9.x86_ 1.1 MB/s | 180 kB 00:00 2026-03-20T11:44:35.728 INFO:teuthology.orchestra.run.vm00.stdout:(27/138): ceph-grafana-dashboards-20.2.0-712.g7 365 kB/s | 43 kB 00:00 2026-03-20T11:44:35.772 INFO:teuthology.orchestra.run.vm00.stdout:(28/138): rbd-mirror-20.2.0-712.g70f8415b.el9.x 7.8 MB/s | 2.9 MB 00:00 2026-03-20T11:44:35.850 INFO:teuthology.orchestra.run.vm00.stdout:(29/138): ceph-mgr-cephadm-20.2.0-712.g70f8415b 1.4 MB/s | 173 kB 00:00 2026-03-20T11:44:36.577 INFO:teuthology.orchestra.run.vm00.stdout:(30/138): ceph-mgr-diskprediction-local-20.2.0- 10 MB/s | 7.4 MB 00:00 2026-03-20T11:44:36.698 INFO:teuthology.orchestra.run.vm00.stdout:(31/138): ceph-mgr-modules-core-20.2.0-712.g70f 2.3 MB/s | 290 kB 00:00 2026-03-20T11:44:36.817 INFO:teuthology.orchestra.run.vm00.stdout:(32/138): ceph-mgr-rook-20.2.0-712.g70f8415b.el 422 kB/s | 50 kB 00:00 2026-03-20T11:44:36.935 INFO:teuthology.orchestra.run.vm00.stdout:(33/138): ceph-prometheus-alerts-20.2.0-712.g70 148 kB/s | 17 kB 00:00 2026-03-20T11:44:37.000 INFO:teuthology.orchestra.run.vm00.stdout:(34/138): ceph-mgr-dashboard-20.2.0-712.g70f841 8.6 MB/s | 11 MB 00:01 2026-03-20T11:44:37.056 INFO:teuthology.orchestra.run.vm00.stdout:(35/138): ceph-volume-20.2.0-712.g70f8415b.el9. 
2.4 MB/s | 298 kB 00:00 2026-03-20T11:44:37.252 INFO:teuthology.orchestra.run.vm00.stdout:(36/138): cephadm-20.2.0-712.g70f8415b.el9.noar 4.0 MB/s | 1.0 MB 00:00 2026-03-20T11:44:37.396 INFO:teuthology.orchestra.run.vm00.stdout:(37/138): bzip2-1.0.8-11.el9.x86_64.rpm 161 kB/s | 55 kB 00:00 2026-03-20T11:44:37.567 INFO:teuthology.orchestra.run.vm00.stdout:(38/138): fuse-2.9.9-17.el9.x86_64.rpm 466 kB/s | 80 kB 00:00 2026-03-20T11:44:38.028 INFO:teuthology.orchestra.run.vm00.stdout:(39/138): ledmon-libs-1.1.0-3.el9.x86_64.rpm 88 kB/s | 40 kB 00:00 2026-03-20T11:44:38.184 INFO:teuthology.orchestra.run.vm00.stdout:(40/138): libconfig-1.7.2-9.el9.x86_64.rpm 463 kB/s | 72 kB 00:00 2026-03-20T11:44:38.842 INFO:teuthology.orchestra.run.vm00.stdout:(41/138): cryptsetup-2.8.1-3.el9.x86_64.rpm 221 kB/s | 351 kB 00:01 2026-03-20T11:44:39.991 INFO:teuthology.orchestra.run.vm00.stdout:(42/138): libgfortran-11.5.0-14.el9.x86_64.rpm 440 kB/s | 794 kB 00:01 2026-03-20T11:44:40.017 INFO:teuthology.orchestra.run.vm00.stdout:(43/138): mailcap-2.1.49-5.el9.noarch.rpm 1.2 MB/s | 33 kB 00:00 2026-03-20T11:44:40.817 INFO:teuthology.orchestra.run.vm00.stdout:(44/138): libquadmath-11.5.0-14.el9.x86_64.rpm 93 kB/s | 184 kB 00:01 2026-03-20T11:44:40.822 INFO:teuthology.orchestra.run.vm00.stdout:(45/138): pciutils-3.7.0-7.el9.x86_64.rpm 116 kB/s | 93 kB 00:00 2026-03-20T11:44:41.913 INFO:teuthology.orchestra.run.vm00.stdout:(46/138): python3-cffi-1.14.5-5.el9.x86_64.rpm 231 kB/s | 253 kB 00:01 2026-03-20T11:44:42.585 INFO:teuthology.orchestra.run.vm00.stdout:(47/138): python3-ply-3.11-14.el9.noarch.rpm 158 kB/s | 106 kB 00:00 2026-03-20T11:44:43.371 INFO:teuthology.orchestra.run.vm00.stdout:(48/138): python3-cryptography-36.0.1-5.el9.x86 500 kB/s | 1.2 MB 00:02 2026-03-20T11:44:44.006 INFO:teuthology.orchestra.run.vm00.stdout:(49/138): python3-pycparser-2.20-6.el9.noarch.r 95 kB/s | 135 kB 00:01 2026-03-20T11:44:44.113 INFO:teuthology.orchestra.run.vm00.stdout:(50/138): python3-pyparsing-2.4.7-9.el9.noarch. 
203 kB/s | 150 kB 00:00 2026-03-20T11:44:44.427 INFO:teuthology.orchestra.run.vm00.stdout:(51/138): python3-requests-2.25.1-10.el9.noarch 300 kB/s | 126 kB 00:00 2026-03-20T11:44:44.613 INFO:teuthology.orchestra.run.vm00.stdout:(52/138): python3-urllib3-1.26.5-7.el9.noarch.r 436 kB/s | 218 kB 00:00 2026-03-20T11:44:45.046 INFO:teuthology.orchestra.run.vm00.stdout:(53/138): unzip-6.0-59.el9.x86_64.rpm 294 kB/s | 182 kB 00:00 2026-03-20T11:44:45.555 INFO:teuthology.orchestra.run.vm00.stdout:(54/138): zip-3.0-35.el9.x86_64.rpm 282 kB/s | 266 kB 00:00 2026-03-20T11:44:45.575 INFO:teuthology.orchestra.run.vm00.stdout:(55/138): boost-program-options-1.75.0-13.el9.x 197 kB/s | 104 kB 00:00 2026-03-20T11:44:45.758 INFO:teuthology.orchestra.run.vm00.stdout:(56/138): flexiblas-3.0.4-9.el9.x86_64.rpm 146 kB/s | 30 kB 00:00 2026-03-20T11:44:45.924 INFO:teuthology.orchestra.run.vm00.stdout:(57/138): flexiblas-openblas-openmp-3.0.4-9.el9 90 kB/s | 15 kB 00:00 2026-03-20T11:44:45.933 INFO:teuthology.orchestra.run.vm00.stdout:(58/138): flexiblas-netlib-3.0.4-9.el9.x86_64.r 8.4 MB/s | 3.0 MB 00:00 2026-03-20T11:44:46.092 INFO:teuthology.orchestra.run.vm00.stdout:(59/138): libnbd-1.20.3-4.el9.x86_64.rpm 974 kB/s | 164 kB 00:00 2026-03-20T11:44:46.146 INFO:teuthology.orchestra.run.vm00.stdout:(60/138): libpmemobj-1.12.1-1.el9.x86_64.rpm 752 kB/s | 160 kB 00:00 2026-03-20T11:44:46.194 INFO:teuthology.orchestra.run.vm00.stdout:(61/138): librabbitmq-0.11.0-7.el9.x86_64.rpm 447 kB/s | 45 kB 00:00 2026-03-20T11:44:46.226 INFO:teuthology.orchestra.run.vm00.stdout:(62/138): librdkafka-1.6.1-102.el9.x86_64.rpm 8.1 MB/s | 662 kB 00:00 2026-03-20T11:44:46.263 INFO:teuthology.orchestra.run.vm00.stdout:(63/138): libxslt-1.1.34-12.el9.x86_64.rpm 6.2 MB/s | 233 kB 00:00 2026-03-20T11:44:46.352 INFO:teuthology.orchestra.run.vm00.stdout:(64/138): lttng-ust-2.12.0-6.el9.x86_64.rpm 3.2 MB/s | 292 kB 00:00 2026-03-20T11:44:46.353 INFO:teuthology.orchestra.run.vm00.stdout:(65/138): libstoragemgmt-1.10.1-1.el9.x86_64.rp 1.5 MB/s | 246 kB 00:00 2026-03-20T11:44:46.472 INFO:teuthology.orchestra.run.vm00.stdout:(66/138): lua-5.4.4-4.el9.x86_64.rpm 1.5 MB/s | 188 kB 00:00 2026-03-20T11:44:46.770 INFO:teuthology.orchestra.run.vm00.stdout:(67/138): openblas-0.3.29-1.el9.x86_64.rpm 101 kB/s | 42 kB 00:00 2026-03-20T11:44:46.830 INFO:teuthology.orchestra.run.vm00.stdout:(68/138): perl-Benchmark-1.23-483.el9.noarch.rp 445 kB/s | 26 kB 00:00 2026-03-20T11:44:46.910 INFO:teuthology.orchestra.run.vm00.stdout:(69/138): perl-Test-Harness-3.42-461.el9.noarch 3.6 MB/s | 295 kB 00:00 2026-03-20T11:44:47.012 INFO:teuthology.orchestra.run.vm00.stdout:(70/138): openblas-openmp-0.3.29-1.el9.x86_64.r 9.8 MB/s | 5.3 MB 00:00 2026-03-20T11:44:47.103 INFO:teuthology.orchestra.run.vm00.stdout:(71/138): protobuf-3.14.0-17.el9.x86_64.rpm 5.2 MB/s | 1.0 MB 00:00 2026-03-20T11:44:47.250 INFO:teuthology.orchestra.run.vm00.stdout:(72/138): python3-devel-3.9.25-3.el9.x86_64.rpm 1.6 MB/s | 244 kB 00:00 2026-03-20T11:44:47.377 INFO:teuthology.orchestra.run.vm00.stdout:(73/138): python3-jinja2-2.11.3-8.el9.noarch.rp 1.9 MB/s | 249 kB 00:00 2026-03-20T11:44:47.444 INFO:teuthology.orchestra.run.vm00.stdout:(74/138): python3-jmespath-1.0.1-1.el9.noarch.r 712 kB/s | 48 kB 00:00 2026-03-20T11:44:47.519 INFO:teuthology.orchestra.run.vm00.stdout:(75/138): python3-libstoragemgmt-1.10.1-1.el9.x 2.3 MB/s | 177 kB 00:00 2026-03-20T11:44:47.552 INFO:teuthology.orchestra.run.vm00.stdout:(76/138): python3-babel-2.9.1-2.el9.noarch.rpm 11 MB/s | 6.0 MB 00:00 
2026-03-20T11:44:47.648 INFO:teuthology.orchestra.run.vm00.stdout:(77/138): python3-markupsafe-1.1.1-12.el9.x86_6 269 kB/s | 35 kB 00:00 2026-03-20T11:44:47.771 INFO:teuthology.orchestra.run.vm00.stdout:(78/138): python3-numpy-f2py-1.23.5-2.el9.x86_6 3.5 MB/s | 442 kB 00:00 2026-03-20T11:44:47.854 INFO:teuthology.orchestra.run.vm00.stdout:(79/138): python3-packaging-20.9-5.el9.noarch.r 936 kB/s | 77 kB 00:00 2026-03-20T11:44:47.924 INFO:teuthology.orchestra.run.vm00.stdout:(80/138): python3-protobuf-3.14.0-17.el9.noarch 3.7 MB/s | 267 kB 00:00 2026-03-20T11:44:48.033 INFO:teuthology.orchestra.run.vm00.stdout:(81/138): python3-pyasn1-0.4.8-7.el9.noarch.rpm 1.4 MB/s | 157 kB 00:00 2026-03-20T11:44:48.085 INFO:teuthology.orchestra.run.vm00.stdout:(82/138): python3-numpy-1.23.5-2.el9.x86_64.rpm 11 MB/s | 6.1 MB 00:00 2026-03-20T11:44:48.153 INFO:teuthology.orchestra.run.vm00.stdout:(83/138): python3-pyasn1-modules-0.4.8-7.el9.no 2.3 MB/s | 277 kB 00:00 2026-03-20T11:44:48.253 INFO:teuthology.orchestra.run.vm00.stdout:(84/138): python3-requests-oauthlib-1.3.0-12.el 319 kB/s | 54 kB 00:00 2026-03-20T11:44:48.426 INFO:teuthology.orchestra.run.vm00.stdout:(85/138): python3-toml-0.10.2-6.el9.noarch.rpm 242 kB/s | 42 kB 00:00 2026-03-20T11:44:48.514 INFO:teuthology.orchestra.run.vm00.stdout:(86/138): qatlib-25.08.0-2.el9.x86_64.rpm 2.7 MB/s | 240 kB 00:00 2026-03-20T11:44:48.595 INFO:teuthology.orchestra.run.vm00.stdout:(87/138): qatlib-service-25.08.0-2.el9.x86_64.r 458 kB/s | 37 kB 00:00 2026-03-20T11:44:48.702 INFO:teuthology.orchestra.run.vm00.stdout:(88/138): qatzip-libs-1.3.1-1.el9.x86_64.rpm 622 kB/s | 66 kB 00:00 2026-03-20T11:44:48.760 INFO:teuthology.orchestra.run.vm00.stdout:(89/138): socat-1.7.4.1-8.el9.x86_64.rpm 5.1 MB/s | 303 kB 00:00 2026-03-20T11:44:49.273 INFO:teuthology.orchestra.run.vm00.stdout:(90/138): python3-scipy-1.9.3-2.el9.x86_64.rpm 17 MB/s | 19 MB 00:01 2026-03-20T11:44:49.313 INFO:teuthology.orchestra.run.vm00.stdout:(91/138): xmlstarlet-1.6.1-20.el9.x86_64.rpm 115 kB/s | 64 kB 00:00 2026-03-20T11:44:49.357 INFO:teuthology.orchestra.run.vm00.stdout:(92/138): lua-devel-5.4.4-4.el9.x86_64.rpm 265 kB/s | 22 kB 00:00 2026-03-20T11:44:49.512 INFO:teuthology.orchestra.run.vm00.stdout:(93/138): protobuf-compiler-3.14.0-17.el9.x86_6 4.2 MB/s | 862 kB 00:00 2026-03-20T11:44:49.875 INFO:teuthology.orchestra.run.vm00.stdout:(94/138): abseil-cpp-20211102.0-4.el9.x86_64.rp 1.0 MB/s | 551 kB 00:00 2026-03-20T11:44:49.912 INFO:teuthology.orchestra.run.vm00.stdout:(95/138): grpc-data-1.46.7-10.el9.noarch.rpm 520 kB/s | 19 kB 00:00 2026-03-20T11:44:49.921 INFO:teuthology.orchestra.run.vm00.stdout:(96/138): gperftools-libs-2.9.1-3.el9.x86_64.rp 752 kB/s | 308 kB 00:00 2026-03-20T11:44:50.039 INFO:teuthology.orchestra.run.vm00.stdout:(97/138): libarrow-doc-9.0.0-15.el9.noarch.rpm 212 kB/s | 25 kB 00:00 2026-03-20T11:44:50.188 INFO:teuthology.orchestra.run.vm00.stdout:(98/138): liboath-2.6.12-1.el9.x86_64.rpm 329 kB/s | 49 kB 00:00 2026-03-20T11:44:50.268 INFO:teuthology.orchestra.run.vm00.stdout:(99/138): libunwind-1.6.2-1.el9.x86_64.rpm 843 kB/s | 67 kB 00:00 2026-03-20T11:44:50.307 INFO:teuthology.orchestra.run.vm00.stdout:(100/138): libarrow-9.0.0-15.el9.x86_64.rpm 11 MB/s | 4.4 MB 00:00 2026-03-20T11:44:50.465 INFO:teuthology.orchestra.run.vm00.stdout:(101/138): luarocks-3.9.2-5.el9.noarch.rpm 767 kB/s | 151 kB 00:00 2026-03-20T11:44:50.792 INFO:teuthology.orchestra.run.vm00.stdout:(102/138): parquet-libs-9.0.0-15.el9.x86_64.rpm 1.7 MB/s | 838 kB 00:00 2026-03-20T11:44:50.824 
INFO:teuthology.orchestra.run.vm00.stdout:(103/138): python3-asyncssh-2.13.2-5.el9.noarch 1.5 MB/s | 548 kB 00:00 2026-03-20T11:44:50.838 INFO:teuthology.orchestra.run.vm00.stdout:(104/138): python3-autocommand-2.2.2-8.el9.noar 643 kB/s | 29 kB 00:00 2026-03-20T11:44:50.868 INFO:teuthology.orchestra.run.vm00.stdout:(105/138): python3-backports-tarfile-1.2.0-1.el 1.3 MB/s | 60 kB 00:00 2026-03-20T11:44:50.874 INFO:teuthology.orchestra.run.vm00.stdout:(106/138): python3-bcrypt-3.2.2-1.el9.x86_64.rp 1.2 MB/s | 43 kB 00:00 2026-03-20T11:44:50.915 INFO:teuthology.orchestra.run.vm00.stdout:(107/138): python3-certifi-2023.05.07-4.el9.noa 345 kB/s | 14 kB 00:00 2026-03-20T11:44:50.916 INFO:teuthology.orchestra.run.vm00.stdout:(108/138): python3-cachetools-4.2.4-1.el9.noarc 669 kB/s | 32 kB 00:00 2026-03-20T11:44:51.012 INFO:teuthology.orchestra.run.vm00.stdout:(109/138): python3-cheroot-10.0.1-4.el9.noarch. 1.8 MB/s | 173 kB 00:00 2026-03-20T11:44:51.026 INFO:teuthology.orchestra.run.vm00.stdout:(110/138): python3-cherrypy-18.6.1-2.el9.noarch 3.2 MB/s | 358 kB 00:00 2026-03-20T11:44:51.091 INFO:teuthology.orchestra.run.vm00.stdout:(111/138): python3-google-auth-2.45.0-1.el9.noa 3.2 MB/s | 254 kB 00:00 2026-03-20T11:44:51.175 INFO:teuthology.orchestra.run.vm00.stdout:(112/138): python3-grpcio-tools-1.46.7-10.el9.x 1.7 MB/s | 144 kB 00:00 2026-03-20T11:44:51.234 INFO:teuthology.orchestra.run.vm00.stdout:(113/138): python3-jaraco-8.2.1-3.el9.noarch.rp 181 kB/s | 11 kB 00:00 2026-03-20T11:44:51.286 INFO:teuthology.orchestra.run.vm00.stdout:(114/138): python3-grpcio-1.46.7-10.el9.x86_64. 7.8 MB/s | 2.0 MB 00:00 2026-03-20T11:44:51.313 INFO:teuthology.orchestra.run.vm00.stdout:(115/138): python3-jaraco-classes-3.2.1-5.el9.n 225 kB/s | 18 kB 00:00 2026-03-20T11:44:51.360 INFO:teuthology.orchestra.run.vm00.stdout:(116/138): python3-jaraco-collections-3.0.0-8.e 318 kB/s | 23 kB 00:00 2026-03-20T11:44:51.423 INFO:teuthology.orchestra.run.vm00.stdout:(117/138): python3-jaraco-context-6.0.1-3.el9.n 178 kB/s | 20 kB 00:00 2026-03-20T11:44:51.469 INFO:teuthology.orchestra.run.vm00.stdout:(118/138): python3-jaraco-text-4.0.0-2.el9.noar 575 kB/s | 26 kB 00:00 2026-03-20T11:44:51.512 INFO:teuthology.orchestra.run.vm00.stdout:(119/138): python3-jaraco-functools-3.5.0-2.el9 128 kB/s | 19 kB 00:00 2026-03-20T11:44:51.595 INFO:teuthology.orchestra.run.vm00.stdout:(120/138): python3-more-itertools-8.12.0-2.el9. 
952 kB/s | 79 kB 00:00 2026-03-20T11:44:51.636 INFO:teuthology.orchestra.run.vm00.stdout:(121/138): python3-natsort-7.1.1-5.el9.noarch.r 1.4 MB/s | 58 kB 00:00 2026-03-20T11:44:51.679 INFO:teuthology.orchestra.run.vm00.stdout:(122/138): python3-portend-3.1.0-2.el9.noarch.r 388 kB/s | 16 kB 00:00 2026-03-20T11:44:51.723 INFO:teuthology.orchestra.run.vm00.stdout:(123/138): python3-kubernetes-26.1.0-3.el9.noar 4.0 MB/s | 1.0 MB 00:00 2026-03-20T11:44:51.734 INFO:teuthology.orchestra.run.vm00.stdout:(124/138): python3-pyOpenSSL-21.0.0-1.el9.noarc 1.6 MB/s | 90 kB 00:00 2026-03-20T11:44:51.773 INFO:teuthology.orchestra.run.vm00.stdout:(125/138): python3-repoze-lru-0.7-16.el9.noarch 610 kB/s | 31 kB 00:00 2026-03-20T11:44:51.821 INFO:teuthology.orchestra.run.vm00.stdout:(126/138): python3-routes-2.5.1-5.el9.noarch.rp 2.1 MB/s | 188 kB 00:00 2026-03-20T11:44:51.855 INFO:teuthology.orchestra.run.vm00.stdout:(127/138): python3-rsa-4.9-2.el9.noarch.rpm 721 kB/s | 59 kB 00:00 2026-03-20T11:44:51.878 INFO:teuthology.orchestra.run.vm00.stdout:(128/138): python3-tempora-5.0.0-2.el9.noarch.r 630 kB/s | 36 kB 00:00 2026-03-20T11:44:51.933 INFO:teuthology.orchestra.run.vm00.stdout:(129/138): python3-typing-extensions-4.15.0-1.e 1.1 MB/s | 86 kB 00:00 2026-03-20T11:44:52.041 INFO:teuthology.orchestra.run.vm00.stdout:(130/138): python3-websocket-client-1.2.3-2.el9 550 kB/s | 90 kB 00:00 2026-03-20T11:44:52.067 INFO:teuthology.orchestra.run.vm00.stdout:(131/138): python3-xmltodict-0.12.0-15.el9.noar 165 kB/s | 22 kB 00:00 2026-03-20T11:44:52.079 INFO:teuthology.orchestra.run.vm00.stdout:(132/138): python3-zc-lockfile-2.0-10.el9.noarc 527 kB/s | 20 kB 00:00 2026-03-20T11:44:52.194 INFO:teuthology.orchestra.run.vm00.stdout:(133/138): re2-20211101-20.el9.x86_64.rpm 1.5 MB/s | 191 kB 00:00 2026-03-20T11:44:52.261 INFO:teuthology.orchestra.run.vm00.stdout:(134/138): s3cmd-2.4.0-1.el9.noarch.rpm 1.1 MB/s | 206 kB 00:00 2026-03-20T11:44:52.487 INFO:teuthology.orchestra.run.vm00.stdout:(135/138): thrift-0.15.0-4.el9.x86_64.rpm 5.4 MB/s | 1.6 MB 00:00 2026-03-20T11:44:53.444 INFO:teuthology.orchestra.run.vm00.stdout:(136/138): ceph-test-20.2.0-712.g70f8415b.el9.x 4.2 MB/s | 84 MB 00:19 2026-03-20T11:44:53.480 INFO:teuthology.orchestra.run.vm00.stdout:(137/138): librbd1-20.2.0-712.g70f8415b.el9.x86 2.9 MB/s | 2.8 MB 00:00 2026-03-20T11:44:53.515 INFO:teuthology.orchestra.run.vm00.stdout:(138/138): librados2-20.2.0-712.g70f8415b.el9.x 2.8 MB/s | 3.5 MB 00:01 2026-03-20T11:44:53.518 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-20T11:44:53.518 INFO:teuthology.orchestra.run.vm00.stdout:Total 11 MB/s | 267 MB 00:25 2026-03-20T11:44:54.053 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-20T11:44:54.112 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-20T11:44:54.112 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-20T11:44:55.118 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-20T11:44:55.118 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-20T11:44:56.243 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-20T11:44:56.327 INFO:teuthology.orchestra.run.vm00.stdout: Installing : thrift-0.15.0-4.el9.x86_64 1/140 2026-03-20T11:44:56.362 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 2/140 2026-03-20T11:44:56.374 INFO:teuthology.orchestra.run.vm00.stdout: Installing : liboath-2.6.12-1.el9.x86_64 3/140 2026-03-20T11:44:56.543 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 4/140 2026-03-20T11:44:56.545 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T11:44:56.579 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 5/140 2026-03-20T11:44:56.588 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T11:44:56.592 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/140 2026-03-20T11:44:56.595 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/140 2026-03-20T11:44:56.598 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 9/140 2026-03-20T11:44:56.605 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 10/140 2026-03-20T11:44:56.809 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 11/140 2026-03-20T11:44:56.812 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T11:44:56.833 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T11:44:56.836 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T11:44:56.863 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 13/140 2026-03-20T11:44:56.865 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T11:44:56.882 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T11:44:56.922 INFO:teuthology.orchestra.run.vm00.stdout: Installing : re2-1:20211101-20.el9.x86_64 15/140 2026-03-20T11:44:56.948 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 16/140 2026-03-20T11:44:57.023 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/140 2026-03-20T11:44:57.042 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 18/140 2026-03-20T11:44:57.051 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-5.4.4-4.el9.x86_64 19/140 2026-03-20T11:44:57.061 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 20/140 2026-03-20T11:44:57.104 INFO:teuthology.orchestra.run.vm00.stdout: Installing : unzip-6.0-59.el9.x86_64 21/140 2026-03-20T11:44:57.142 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 22/140 2026-03-20T11:44:57.174 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 23/140 2026-03-20T11:44:57.194 
INFO:teuthology.orchestra.run.vm00.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 24/140 2026-03-20T11:44:57.201 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 25/140 2026-03-20T11:44:57.249 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 26/140 2026-03-20T11:44:57.280 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 27/140 2026-03-20T11:44:57.303 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 28/140 2026-03-20T11:44:57.309 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T11:44:57.365 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 29/140 2026-03-20T11:44:57.368 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T11:44:57.390 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 30/140 2026-03-20T11:44:57.407 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 31/140 2026-03-20T11:44:57.416 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 32/140 2026-03-20T11:44:57.450 INFO:teuthology.orchestra.run.vm00.stdout: Installing : zip-3.0-35.el9.x86_64 33/140 2026-03-20T11:44:57.455 INFO:teuthology.orchestra.run.vm00.stdout: Installing : luarocks-3.9.2-5.el9.noarch 34/140 2026-03-20T11:44:57.464 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 35/140 2026-03-20T11:44:57.543 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 36/140 2026-03-20T11:44:57.562 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/140 2026-03-20T11:44:57.582 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/140 2026-03-20T11:44:57.588 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 39/140 2026-03-20T11:44:57.596 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 40/140 2026-03-20T11:44:57.604 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 41/140 2026-03-20T11:44:57.608 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 42/140 2026-03-20T11:44:57.628 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 43/140 2026-03-20T11:44:57.635 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 44/140 2026-03-20T11:44:57.642 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/140 2026-03-20T11:44:57.656 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/140 2026-03-20T11:44:57.669 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/140 2026-03-20T11:44:57.675 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/140 2026-03-20T11:44:57.685 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 49/140 
2026-03-20T11:44:57.730 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 50/140 2026-03-20T11:44:58.115 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 51/140 2026-03-20T11:44:58.130 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 52/140 2026-03-20T11:44:58.135 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 53/140 2026-03-20T11:44:58.143 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 54/140 2026-03-20T11:44:58.146 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 55/140 2026-03-20T11:44:58.154 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 56/140 2026-03-20T11:44:58.157 INFO:teuthology.orchestra.run.vm00.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 57/140 2026-03-20T11:44:58.159 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 58/140 2026-03-20T11:44:58.194 INFO:teuthology.orchestra.run.vm00.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 59/140 2026-03-20T11:44:58.250 INFO:teuthology.orchestra.run.vm00.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 60/140 2026-03-20T11:44:58.263 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 61/140 2026-03-20T11:44:58.271 INFO:teuthology.orchestra.run.vm00.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 62/140 2026-03-20T11:44:58.276 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 63/140 2026-03-20T11:44:58.283 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 64/140 2026-03-20T11:44:58.289 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 65/140 2026-03-20T11:44:58.297 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 66/140 2026-03-20T11:44:58.303 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 67/140 2026-03-20T11:44:58.336 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 68/140 2026-03-20T11:44:58.352 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 69/140 2026-03-20T11:44:58.361 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 70/140 2026-03-20T11:44:58.371 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 71/140 2026-03-20T11:44:58.413 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 72/140 2026-03-20T11:44:58.688 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/140 2026-03-20T11:44:58.719 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/140 2026-03-20T11:44:58.724 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T11:44:58.729 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Benchmark-1.23-483.el9.noarch 76/140 2026-03-20T11:44:58.791 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-0.3.29-1.el9.x86_64 77/140 2026-03-20T11:44:58.794 INFO:teuthology.orchestra.run.vm00.stdout: Installing : 
openblas-openmp-0.3.29-1.el9.x86_64 78/140 2026-03-20T11:44:58.818 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 79/140 2026-03-20T11:44:59.211 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 80/140 2026-03-20T11:44:59.300 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 81/140 2026-03-20T11:45:00.100 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 82/140 2026-03-20T11:45:00.125 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 83/140 2026-03-20T11:45:00.131 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 84/140 2026-03-20T11:45:00.134 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 85/140 2026-03-20T11:45:00.142 INFO:teuthology.orchestra.run.vm00.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 86/140 2026-03-20T11:45:00.454 INFO:teuthology.orchestra.run.vm00.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 87/140 2026-03-20T11:45:00.456 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-20T11:45:00.478 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 88/140 2026-03-20T11:45:00.479 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 89/140 2026-03-20T11:45:01.716 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-20T11:45:01.721 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-20T11:45:01.741 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 90/140 2026-03-20T11:45:01.755 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 91/140 2026-03-20T11:45:01.764 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-packaging-20.9-5.el9.noarch 92/140 2026-03-20T11:45:01.805 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ply-3.11-14.el9.noarch 93/140 2026-03-20T11:45:01.826 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 94/140 2026-03-20T11:45:01.917 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 95/140 2026-03-20T11:45:01.931 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 96/140 2026-03-20T11:45:01.960 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 97/140 2026-03-20T11:45:01.997 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 98/140 2026-03-20T11:45:02.058 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 99/140 2026-03-20T11:45:02.067 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 100/140 2026-03-20T11:45:02.073 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 101/140 2026-03-20T11:45:02.079 INFO:teuthology.orchestra.run.vm00.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 102/140 2026-03-20T11:45:02.083 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 103/140 
2026-03-20T11:45:02.085 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-20T11:45:02.103 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 104/140 2026-03-20T11:45:02.435 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 105/140 2026-03-20T11:45:02.441 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-20T11:45:02.481 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 106/140 2026-03-20T11:45:02.481 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target. 2026-03-20T11:45:02.482 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service. 2026-03-20T11:45:02.482 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:02.486 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 107/140 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp 2026-03-20T11:45:08.477 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:08.603 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-20T11:45:08.627 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 108/140 2026-03-20T11:45:08.628 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:08.628 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-20T11:45:08.628 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-20T11:45:08.628 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target. 2026-03-20T11:45:08.628 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:08.867 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 109/140 2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 
2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target. 2026-03-20T11:45:08.891 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:08.899 INFO:teuthology.orchestra.run.vm00.stdout: Installing : mailcap-2.1.49-5.el9.noarch 110/140 2026-03-20T11:45:08.902 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 111/140 2026-03-20T11:45:08.921 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T11:45:08.921 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'qat' with GID 994. 2026-03-20T11:45:08.921 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'libstoragemgmt' with GID 993. 2026-03-20T11:45:08.921 INFO:teuthology.orchestra.run.vm00.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993. 2026-03-20T11:45:08.921 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:08.933 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T11:45:08.967 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 112/140 2026-03-20T11:45:08.967 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service. 2026-03-20T11:45:08.967 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:08.990 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 113/140 2026-03-20T11:45:09.018 INFO:teuthology.orchestra.run.vm00.stdout: Installing : fuse-2.9.9-17.el9.x86_64 114/140 2026-03-20T11:45:09.092 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 115/140 2026-03-20T11:45:09.097 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-20T11:45:09.114 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 116/140 2026-03-20T11:45:09.114 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:09.114 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 2026-03-20T11:45:09.114 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:09.965 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 117/140 2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 
2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target. 2026-03-20T11:45:09.995 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:10.147 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-20T11:45:10.151 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 118/140 2026-03-20T11:45:10.160 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 119/140 2026-03-20T11:45:10.187 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 120/140 2026-03-20T11:45:10.191 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-20T11:45:11.541 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 121/140 2026-03-20T11:45:11.551 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-20T11:45:12.095 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 122/140 2026-03-20T11:45:12.098 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-20T11:45:12.168 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 123/140 2026-03-20T11:45:12.220 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 124/140 2026-03-20T11:45:12.222 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 125/140 2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target. 
2026-03-20T11:45:12.249 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:12.266 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-20T11:45:12.277 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 126/140 2026-03-20T11:45:12.331 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 127/140 2026-03-20T11:45:13.564 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 128/140 2026-03-20T11:45:13.568 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 129/140 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target. 2026-03-20T11:45:13.589 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:13.602 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-20T11:45:13.624 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:20.2.0-712.g70f841 130/140 2026-03-20T11:45:13.624 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:13.624 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-20T11:45:13.624 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:13.769 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 131/140 2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target. 
2026-03-20T11:45:13.793 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:18.230 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 132/140 2026-03-20T11:45:18.238 INFO:teuthology.orchestra.run.vm00.stdout: Installing : perl-Test-Harness-1:3.42-461.el9.noarch 133/140 2026-03-20T11:45:18.245 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 134/140 2026-03-20T11:45:18.257 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 135/140 2026-03-20T11:45:18.279 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 136/140 2026-03-20T11:45:18.288 INFO:teuthology.orchestra.run.vm00.stdout: Installing : s3cmd-2.4.0-1.el9.noarch 137/140 2026-03-20T11:45:18.291 INFO:teuthology.orchestra.run.vm00.stdout: Installing : bzip2-1.0.8-11.el9.x86_64 138/140 2026-03-20T11:45:18.292 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T11:45:18.308 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 139/140 2026-03-20T11:45:18.308 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:20.2.0-712.g70f8415b.el9.x86_64 1/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 3/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 4/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:20.2.0-712.g70f841 5/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 6/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 7/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 8/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 9/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 10/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 11/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 12/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_6 13/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_ 14/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 15/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 16/140 2026-03-20T11:45:19.818 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 17/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_ 18/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 19/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9 20/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x 21/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 22/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 23/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 24/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 25/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 26/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 27/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 28/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.e 29/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noar 30/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.no 31/140 2026-03-20T11:45:19.818 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8 32/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9 33/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 34/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el 35/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 36/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:20.2.0-712.g70f8415b.el9.noarch 37/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : bzip2-1.0.8-11.el9.x86_64 38/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 39/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : fuse-2.9.9-17.el9.x86_64 40/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 41/140 2026-03-20T11:45:19.821 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 42/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 43/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
libquadmath-11.5.0-14.el9.x86_64 44/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 45/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 46/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 47/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.noarch 49/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 50/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 51/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 52/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 53/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 54/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 55/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 56/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 57/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 58/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 59/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 60/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 61/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 62/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 63/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 64/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 65/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 66/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 67/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 68/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 69/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Benchmark-1.23-483.el9.noarch 70/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : perl-Test-Harness-1:3.42-461.el9.noarch 71/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 72/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 73/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: 
Verifying : python3-devel-3.9.25-3.el9.x86_64 74/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 75/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 76/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 77/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 78/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 79/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 80/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 81/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 82/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 83/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 84/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 85/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 86/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 87/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 88/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 89/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 90/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 91/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 92/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 93/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 94/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 95/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 96/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 97/140 2026-03-20T11:45:19.822 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 98/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 99/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 100/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 101/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 102/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 103/140 
2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 104/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 105/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 106/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 107/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 108/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 109/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 110/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 111/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 112/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 113/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 114/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 115/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 116/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 117/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 118/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 119/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 120/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 121/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 122/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 123/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 124/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 125/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 126/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 127/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 128/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 129/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 130/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 131/140 2026-03-20T11:45:19.823 
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : s3cmd-2.4.0-1.el9.noarch 135/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 136/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:20.2.0-712.g70f8415b.el9.x86_64 137/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 138/140 2026-03-20T11:45:19.823 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 139/140 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 140/140 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout:Upgraded: 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout:Installed: 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: bzip2-1.0.8-11.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.935 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:20.2.0-712.g70f8415b.el9.x86_64 
2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:20.2.0-712.g70f8415b.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: fuse-2.9.9-17.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-proxy2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-14.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: 
libxslt-1.1.34-12.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: perl-Benchmark-1.23-483.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: perl-Test-Harness-1:3.42-461.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-20T11:45:19.936 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-20T11:45:19.937 
INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:20.2.0-712.g70f8415b.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: s3cmd-2.4.0-1.el9.noarch 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-20T11:45:19.937 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-03-20T11:45:19.938 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-20T11:45:19.938 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-03-20T11:45:19.938 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:19.938 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-03-20T11:45:20.038 DEBUG:teuthology.parallel:result is None 2026-03-20T11:45:20.038 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=70f8415b300f041766fa27faf7d5472699e32388 2026-03-20T11:45:20.664 DEBUG:teuthology.orchestra.run.vm00:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}' 2026-03-20T11:45:20.685 INFO:teuthology.orchestra.run.vm00.stdout:20.2.0-712.g70f8415b.el9 2026-03-20T11:45:20.685 INFO:teuthology.packaging:The installed version of ceph is 20.2.0-712.g70f8415b.el9 2026-03-20T11:45:20.685 INFO:teuthology.task.install:The correct ceph version 20.2.0-712.g70f8415b is installed. 2026-03-20T11:45:20.686 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-20T11:45:20.686 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:20.686 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-20T11:45:20.753 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-20T11:45:20.753 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:20.753 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper 2026-03-20T11:45:20.816 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-20T11:45:20.880 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 
2026-03-20T11:45:20.881 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:20.881 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-20T11:45:20.944 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-20T11:45:21.008 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 2026-03-20T11:45:21.008 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:21.008 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer 2026-03-20T11:45:21.073 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-20T11:45:21.138 INFO:teuthology.run_tasks:Running task ceph... 2026-03-20T11:45:21.186 INFO:tasks.ceph:Making ceph log dir writeable by non-root... 2026-03-20T11:45:21.186 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 777 /var/log/ceph 2026-03-20T11:45:21.213 INFO:tasks.ceph:Disabling ceph logrotate... 2026-03-20T11:45:21.213 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/logrotate.d/ceph 2026-03-20T11:45:21.278 INFO:tasks.ceph:Creating extra log directories... 2026-03-20T11:45:21.278 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/log/ceph/valgrind /var/log/ceph/profiling-logger 2026-03-20T11:45:21.347 INFO:tasks.ceph:Creating ceph cluster ceph... 2026-03-20T11:45:21.347 INFO:tasks.ceph:config {'conf': {'client': {'debug ms': 1, 'debug rgw': 20, 'rgw enable static website': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'fs': 'xfs', 'mkfs_options': None, 'mount_options': None, 'skip_mgr_daemons': False, 'log_ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'cpu_profile': set(), 'cluster': 'ceph', 'mon_bind_msgr2': True, 'mon_bind_addrvec': True} 2026-03-20T11:45:21.347 INFO:tasks.ceph:ctx.config {'archive_path': '/archive/kyr-2026-03-20_10:58:43-rgw-tentacle-none-default-vps/2075', 'branch': 'tentacle', 'description': 'rgw/tools/{centos_latest cluster ignore-pg-availability tasks}', 'email': None, 'first_in_suite': False, 'flavor': 'default', 'job_id': '2075', 'last_in_suite': False, 'machine_type': 'vps', 'name': 'kyr-2026-03-20_10:58:43-rgw-tentacle-none-default-vps', 'no_nested_subset': False, 'openstack': [{'volumes': {'count': 1, 'size': 10}}], 'os_type': 'centos', 'os_version': '9.stream', 'overrides': {'admin_socket': {'branch': 'tentacle'}, 'ansible.cephlab': {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'logical_volumes': {'lv_1': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_2': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_3': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}, 'lv_4': {'scratch_dev': True, 'size': '25%VG', 'vg': 'vg_nvme'}}, 'timezone': 'UTC', 'volume_groups': {'vg_nvme': {'pvs': '/dev/vdb,/dev/vdc,/dev/vdd,/dev/vde'}}}}, 'ceph': {'conf': {'client': {'debug ms': 1, 'debug rgw': 20, 'rgw enable static website': False}, 'mgr': {'debug mgr': 20, 'debug ms': 1}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': 
['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', '\\(PG_AVAILABILITY\\)', '\\(PG_DEGRADED\\)', '\\(POOL_APP_NOT_ENABLED\\)', 'not have an application enabled'], 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'ceph-deploy': {'conf': {'client': {'log file': '/var/log/ceph/ceph-$name.$pid.log'}, 'mon': {}}}, 'cephadm': {'cephadm_binary_url': 'https://download.ceph.com/rpm-20.2.0/el9/noarch/cephadm'}, 'install': {'ceph': {'flavor': 'default', 'sha1': '70f8415b300f041766fa27faf7d5472699e32388'}, 'extra_system_packages': {'deb': ['python3-jmespath', 'python3-xmltodict', 's3cmd'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-jmespath', 'python3-xmltodict', 's3cmd']}}, 'rgw': {'frontend': 'beast'}, 'workunit': {'branch': 'tt-tentacle', 'sha1': '7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe'}}, 'owner': 'kyr', 'priority': 1000, 'repo': 'https://github.com/ceph/ceph.git', 'roles': [['mon.a', 'osd.0', 'osd.1', 'osd.2', 'mgr.0', 'client.0']], 'seed': 7702, 'sha1': '70f8415b300f041766fa27faf7d5472699e32388', 'sleep_before_teardown': 0, 'suite': 'rgw', 'suite_branch': 'tt-tentacle', 'suite_path': '/home/teuthos/src/github.com_kshtsk_ceph_7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe/qa', 'suite_relpath': 'qa', 'suite_repo': 'https://github.com/kshtsk/ceph.git', 'suite_sha1': '7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe', 'targets': {'vm00.local': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKw9avWVk91afIbXkwyFOaonigzL3YxO5+mPEVDub9AWHO0sZOEv79VavLWGHxVnTUaem9r0phN/JMfoPxaloTs='}, 'tasks': [{'internal.check_packages': None}, {'internal.buildpackages_prep': None}, {'internal.save_config': None}, {'internal.check_lock': None}, {'internal.add_remotes': None}, {'console_log': None}, {'internal.connect': None}, {'internal.push_inventory': None}, {'internal.serialize_remote_roles': None}, {'internal.check_conflict': None}, {'internal.check_ceph_data': None}, {'internal.vm_setup': None}, {'internal.base': None}, {'internal.archive_upload': None}, {'internal.archive': None}, {'internal.coredump': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, {'pcp': None}, {'selinux': None}, {'ansible.cephlab': None}, {'clock': None}, {'install': None}, {'ceph': None}, {'rgw': {'client.0': {'dns-name': ''}}}, {'workunit': {'clients': {'client.0': ['rgw/test_rgw_orphan_list.sh']}}}], 'teuthology': {'fragments_dropped': [], 'meta': {}, 'postmerge': []}, 'teuthology_branch': 'clyso-debian-13', 'teuthology_repo': 'https://github.com/clyso/teuthology', 'teuthology_sha1': '1c580df7a9c7c2aadc272da296344fd99f27c444', 'timestamp': '2026-03-20_10:58:43', 'tube': 'vps', 'user': 'kyr', 'verbose': False, 'worker_log': '/home/teuthos/.teuthology/dispatcher/dispatcher.vps.4188345'} 2026-03-20T11:45:21.348 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/ceph.data 2026-03-20T11:45:21.400 DEBUG:teuthology.orchestra.run.vm00:> sudo install -d -m0777 -- /var/run/ceph 2026-03-20T11:45:21.467 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:21.467 DEBUG:teuthology.orchestra.run.vm00:> dd if=/scratch_devs of=/dev/stdout 2026-03-20T11:45:21.525 DEBUG:teuthology.misc:devs=['/dev/vg_nvme/lv_1', '/dev/vg_nvme/lv_2', '/dev/vg_nvme/lv_3', '/dev/vg_nvme/lv_4'] 2026-03-20T11:45:21.525 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_1 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_1 -> ../dm-0 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 
symbolic link 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 629 Links: 1 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 11:45:18.640894882 +0000 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 11:44:00.239476943 +0000 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 11:44:00.239476943 +0000 2026-03-20T11:45:21.585 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 11:44:00.239476943 +0000 2026-03-20T11:45:21.585 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_1 of=/dev/null count=1 2026-03-20T11:45:21.650 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T11:45:21.650 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T11:45:21.650 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000208351 s, 2.5 MB/s 2026-03-20T11:45:21.651 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_1 2026-03-20T11:45:21.709 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_2 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_2 -> ../dm-1 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 694 Links: 1 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 11:45:18.640894882 +0000 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 11:44:00.463477291 +0000 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 11:44:00.463477291 +0000 2026-03-20T11:45:21.768 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 11:44:00.463477291 +0000 2026-03-20T11:45:21.768 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_2 of=/dev/null count=1 2026-03-20T11:45:21.832 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T11:45:21.832 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T11:45:21.832 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000193673 s, 2.6 MB/s 2026-03-20T11:45:21.833 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_2 2026-03-20T11:45:21.889 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_3 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_3 -> ../dm-2 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 726 Links: 1 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 11:45:18.641894884 +0000 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 11:44:00.669477611 +0000 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 11:44:00.669477611 +0000 2026-03-20T11:45:21.947 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 11:44:00.669477611 +0000 2026-03-20T11:45:21.947 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_3 of=/dev/null count=1 2026-03-20T11:45:22.012 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T11:45:22.012 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T11:45:22.012 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000187601 s, 2.7 MB/s 2026-03-20T11:45:22.013 DEBUG:teuthology.orchestra.run.vm00:> ! mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_3 2026-03-20T11:45:22.070 DEBUG:teuthology.orchestra.run.vm00:> stat /dev/vg_nvme/lv_4 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout: File: /dev/vg_nvme/lv_4 -> ../dm-3 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout: Size: 7 Blocks: 0 IO Block: 4096 symbolic link 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Device: 6h/6d Inode: 769 Links: 1 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Context: system_u:object_r:device_t:s0 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Access: 2026-03-20 11:45:18.641894884 +0000 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Modify: 2026-03-20 11:44:00.908477981 +0000 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout:Change: 2026-03-20 11:44:00.908477981 +0000 2026-03-20T11:45:22.127 INFO:teuthology.orchestra.run.vm00.stdout: Birth: 2026-03-20 11:44:00.908477981 +0000 2026-03-20T11:45:22.127 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/dev/vg_nvme/lv_4 of=/dev/null count=1 2026-03-20T11:45:22.192 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records in 2026-03-20T11:45:22.192 INFO:teuthology.orchestra.run.vm00.stderr:1+0 records out 2026-03-20T11:45:22.192 INFO:teuthology.orchestra.run.vm00.stderr:512 bytes copied, 0.000194203 s, 2.6 MB/s 2026-03-20T11:45:22.193 DEBUG:teuthology.orchestra.run.vm00:> ! 
mount | grep -v devtmpfs | grep -q /dev/vg_nvme/lv_4 2026-03-20T11:45:22.248 INFO:tasks.ceph:osd dev map: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'} 2026-03-20T11:45:22.249 INFO:tasks.ceph:remote_to_roles_to_devs: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}} 2026-03-20T11:45:22.249 INFO:tasks.ceph:Generating config... 2026-03-20T11:45:22.249 INFO:tasks.ceph:[client] debug ms = 1 2026-03-20T11:45:22.249 INFO:tasks.ceph:[client] debug rgw = 20 2026-03-20T11:45:22.249 INFO:tasks.ceph:[client] rgw enable static website = False 2026-03-20T11:45:22.249 INFO:tasks.ceph:[mgr] debug mgr = 20 2026-03-20T11:45:22.249 INFO:tasks.ceph:[mgr] debug ms = 1 2026-03-20T11:45:22.249 INFO:tasks.ceph:[mon] debug mon = 20 2026-03-20T11:45:22.249 INFO:tasks.ceph:[mon] debug ms = 1 2026-03-20T11:45:22.249 INFO:tasks.ceph:[mon] debug paxos = 20 2026-03-20T11:45:22.249 INFO:tasks.ceph:[osd] debug ms = 1 2026-03-20T11:45:22.249 INFO:tasks.ceph:[osd] debug osd = 20 2026-03-20T11:45:22.249 INFO:tasks.ceph:[osd] osd mclock iops capacity threshold hdd = 49000 2026-03-20T11:45:22.249 INFO:tasks.ceph:Setting up mon.a... 2026-03-20T11:45:22.249 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring /etc/ceph/ceph.keyring 2026-03-20T11:45:22.326 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.keyring 2026-03-20T11:45:22.329 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring 2026-03-20T11:45:22.407 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T11:45:22.471 DEBUG:tasks.ceph:Ceph mon addresses: [('mon.a', '192.168.123.100')] 2026-03-20T11:45:22.471 DEBUG:tasks.ceph:writing out conf {'global': {'chdir': '', 'pid file': '/var/run/ceph/$cluster-$name.pid', 'auth supported': 'cephx', 'filestore xattr use omap': 'true', 'mon clock drift allowed': '1.000', 'osd crush chooseleaf type': '0', 'auth debug': 'true', 'ms die on old message': 'true', 'ms die on bug': 'true', 'mon max pg per osd': '10000', 'mon pg warn max object skew': '0', 'osd_pool_default_pg_autoscale_mode': 'off', 'osd pool default size': '2', 'mon osd allow primary affinity': 'true', 'mon osd allow pg remap': 'true', 'mon warn on legacy crush tunables': 'false', 'mon warn on crush straw calc version zero': 'false', 'mon warn on no sortbitwise': 'false', 'mon warn on osd down out interval zero': 'false', 'mon warn on too few osds': 'false', 'mon_warn_on_pool_pg_num_not_power_of_two': 'false', 'mon_warn_on_pool_no_redundancy': 'false', 'mon_allow_pool_size_one': 'true', 'osd pool default erasure code profile': 'plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd', 'osd default data pool replay window': '5', 'mon allow pool delete': 'true', 'mon cluster log file level': 'debug', 'debug asserts on shutdown': 'true', 'mon health detail to clog': 'false', 'mon host': '192.168.123.100'}, 'osd': {'osd journal size': '100', 'osd scrub load threshold': '5.0', 'osd scrub max interval': '600', 'osd mclock profile': 'high_recovery_ops', 'osd mclock skip benchmark': 'true', 'osd recover clone overlap': 'true', 'osd recovery max chunk': '1048576', 'osd debug shutdown': 'true', 'osd debug op order': 'true', 'osd debug verify stray on activate': 'true', 'osd debug trim objects': 
'true', 'osd open classes on start': 'true', 'osd debug pg log writeout': 'true', 'osd deep scrub update digest min age': '30', 'osd map max advance': '10', 'journal zero on create': 'true', 'filestore ondisk finisher threads': '3', 'filestore apply finisher threads': '3', 'bdev debug aio': 'true', 'osd debug misdirected ops': 'true', 'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}, 'mgr': {'debug ms': 1, 'debug mgr': 20, 'debug mon': '20', 'debug auth': '20', 'mon reweight min pgs per osd': '4', 'mon reweight min bytes per osd': '10', 'mgr/telemetry/nag': 'false'}, 'mon': {'debug ms': 1, 'debug mon': 20, 'debug paxos': 20, 'debug auth': '20', 'mon data avail warn': '5', 'mon mgr mkfs grace': '240', 'mon reweight min pgs per osd': '4', 'mon osd reporter subtree level': 'osd', 'mon osd prime pg temp': 'true', 'mon reweight min bytes per osd': '10', 'auth mon ticket ttl': '660', 'auth service ticket ttl': '240', 'mon_warn_on_insecure_global_id_reclaim': 'false', 'mon_warn_on_insecure_global_id_reclaim_allowed': 'false', 'mon_down_mkfs_grace': '2m', 'mon_warn_on_filestore_osds': 'false'}, 'client': {'rgw cache enabled': 'true', 'rgw enable ops log': 'true', 'rgw enable usage log': 'true', 'log file': '/var/log/ceph/$cluster-$name.$pid.log', 'admin socket': '/var/run/ceph/$cluster-$name.$pid.asok', 'debug ms': 1, 'debug rgw': 20, 'rgw enable static website': False}, 'mon.a': {}} 2026-03-20T11:45:22.472 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:22.472 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T11:45:22.526 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber --enable-all-features --add a 192.168.123.100 --print /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: monmap file /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: generated fsid d2998f34-0acb-4cf3-b295-d778019a8c29 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:setting min_mon_release = tentacle 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:epoch 0 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:fsid d2998f34-0acb-4cf3-b295-d778019a8c29 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:last_changed 2026-03-20T11:45:22.604691+0000 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:created 2026-03-20T11:45:22.604691+0000 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:min_mon_release 20 (tentacle) 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:election_strategy: 1 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:0: [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] mon.a 2026-03-20T11:45:22.601 INFO:teuthology.orchestra.run.vm00.stdout:monmaptool: writing epoch 0 to /home/ubuntu/cephtest/ceph.monmap (1 monitors) 2026-03-20T11:45:22.603 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.tmp.conf 2026-03-20T11:45:22.659 INFO:tasks.ceph:Writing /etc/ceph/ceph.conf for FSID d2998f34-0acb-4cf3-b295-d778019a8c29... 
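The monitor bootstrap logged above boils down to a short command sequence. A minimal sketch, condensed from this run (sudo and the adjust-ulimits/ceph-coverage wrappers are omitted; the keyring path, temporary conf path, and mon address are the ones used here):

    # create the cluster keyring and generate a key for the mon. identity
    ceph-authtool --create-keyring /etc/ceph/ceph.keyring
    ceph-authtool --gen-key --name=mon. /etc/ceph/ceph.keyring
    chmod 0644 /etc/ceph/ceph.keyring
    # build and print an initial monmap with a single monitor "a" at the mon
    # address; the generated fsid is the one written into /etc/ceph/ceph.conf below
    monmaptool -c /home/ubuntu/cephtest/ceph.tmp.conf --create --clobber \
        --enable-all-features --add a 192.168.123.100 \
        --print /home/ubuntu/cephtest/ceph.monmap

The single-monitor monmap matches this job's role list, which places mon.a, all OSDs, mgr.0 and client.0 on the one vps node.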
2026-03-20T11:45:22.659 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /etc/ceph && sudo chmod 0755 /etc/ceph && sudo tee /etc/ceph/ceph.conf && sudo chmod 0644 /etc/ceph/ceph.conf > /dev/null 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout:[global] 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: chdir = "" 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: pid file = /var/run/ceph/$cluster-$name.pid 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: auth supported = cephx 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: filestore xattr use omap = true 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: mon clock drift allowed = 1.000 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: osd crush chooseleaf type = 0 2026-03-20T11:45:22.741 INFO:teuthology.orchestra.run.vm00.stdout: auth debug = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: ms die on old message = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: ms die on bug = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon max pg per osd = 10000 # >= luminous 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon pg warn max object skew = 0 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: # disable pg_autoscaler by default for new pools 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd_pool_default_pg_autoscale_mode = off 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default size = 2 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow primary affinity = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon osd allow pg remap = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on legacy crush tunables = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on crush straw calc version zero = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on no sortbitwise = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on osd down out interval zero = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon warn on too few osds = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_pg_num_not_power_of_two = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_pool_no_redundancy = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon_allow_pool_size_one = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd pool default erasure code profile = plugin=isa technique=reed_sol_van k=2 m=1 crush-failure-domain=osd 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 
2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd default data pool replay window = 5 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon allow pool delete = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon cluster log file level = debug 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: debug asserts on shutdown = true 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon health detail to clog = false 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: mon host = 192.168.123.100 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: fsid = d2998f34-0acb-4cf3-b295-d778019a8c29 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout:[osd] 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd journal size = 100 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub load threshold = 5.0 2026-03-20T11:45:22.742 INFO:teuthology.orchestra.run.vm00.stdout: osd scrub max interval = 600 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock profile = high_recovery_ops 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock skip benchmark = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd recover clone overlap = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd recovery max chunk = 1048576 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug shutdown = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug op order = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug verify stray on activate = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug trim objects = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd open classes on start = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug pg log writeout = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd deep scrub update digest min age = 30 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd map max advance = 10 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: journal zero on create = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: filestore ondisk finisher threads = 3 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: filestore apply finisher threads = 3 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: bdev debug aio = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd debug misdirected 
ops = true 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug osd = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: osd mclock iops capacity threshold hdd = 49000 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout:[mgr] 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug mgr = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: mgr/telemetry/nag = false 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout:[mon] 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug mon = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug paxos = 20 2026-03-20T11:45:22.743 INFO:teuthology.orchestra.run.vm00.stdout: debug auth = 20 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon data avail warn = 5 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon mgr mkfs grace = 240 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min pgs per osd = 4 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon osd reporter subtree level = osd 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon osd prime pg temp = true 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon reweight min bytes per osd = 10 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: # rotate auth tickets quickly to exercise renewal paths 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: auth mon ticket ttl = 660 # 11m 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: auth service ticket ttl = 240 # 4m 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: # don't complain about insecure global_id in the test suite 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim = false 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_insecure_global_id_reclaim_allowed = false 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: # 1m isn't quite enough 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon_down_mkfs_grace = 2m 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: mon_warn_on_filestore_osds = false 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout:[client] 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: 
rgw cache enabled = true 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable ops log = true 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable usage log = true 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: log file = /var/log/ceph/$cluster-$name.$pid.log 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: admin socket = /var/run/ceph/$cluster-$name.$pid.asok 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: debug ms = 1 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: debug rgw = 20 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout: rgw enable static website = False 2026-03-20T11:45:22.744 INFO:teuthology.orchestra.run.vm00.stdout:[mon.a] 2026-03-20T11:45:22.752 INFO:tasks.ceph:Creating admin key on mon.a... 2026-03-20T11:45:22.753 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /etc/ceph/ceph.keyring 2026-03-20T11:45:22.840 INFO:tasks.ceph:Copying monmap to all nodes... 2026-03-20T11:45:22.840 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:22.840 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.keyring of=/dev/stdout 2026-03-20T11:45:22.857 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:22.857 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/ceph.monmap of=/dev/stdout 2026-03-20T11:45:22.914 INFO:tasks.ceph:Sending monmap to node ubuntu@vm00.local 2026-03-20T11:45:22.914 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:22.914 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/ceph/ceph.keyring 2026-03-20T11:45:22.914 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod 0644 /etc/ceph/ceph.keyring 2026-03-20T11:45:22.990 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:22.990 DEBUG:teuthology.orchestra.run.vm00:> dd of=/home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:23.044 INFO:tasks.ceph:Setting up mon nodes... 2026-03-20T11:45:23.044 INFO:tasks.ceph:Setting up mgr nodes... 2026-03-20T11:45:23.045 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mgr/ceph-0 && sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=mgr.0 /var/lib/ceph/mgr/ceph-0/keyring 2026-03-20T11:45:23.129 INFO:teuthology.orchestra.run.vm00.stdout:creating /var/lib/ceph/mgr/ceph-0/keyring 2026-03-20T11:45:23.132 INFO:tasks.ceph:Setting up mds nodes... 2026-03-20T11:45:23.132 INFO:tasks.ceph_client:Setting up client nodes... 2026-03-20T11:45:23.132 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool --create-keyring --gen-key --name=client.0 /etc/ceph/ceph.client.0.keyring && sudo chmod 0644 /etc/ceph/ceph.client.0.keyring 2026-03-20T11:45:23.170 INFO:teuthology.orchestra.run.vm00.stdout:creating /etc/ceph/ceph.client.0.keyring 2026-03-20T11:45:23.182 INFO:tasks.ceph:Running mkfs on osd nodes... 
2026-03-20T11:45:23.182 INFO:tasks.ceph:ctx.disk_config.remote_to_roles_to_dev: {Remote(name='ubuntu@vm00.local'): {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'}} 2026-03-20T11:45:23.183 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-0 2026-03-20T11:45:23.248 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'} 2026-03-20T11:45:23.248 INFO:tasks.ceph:role: osd.0 2026-03-20T11:45:23.248 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_1 on ubuntu@vm00.local 2026-03-20T11:45:23.249 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_1 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_1 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T11:45:23.315 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T11:45:23.319 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
2026-03-20T11:45:23.321 INFO:tasks.ceph:mount /dev/vg_nvme/lv_1 on ubuntu@vm00.local -o noatime 2026-03-20T11:45:23.321 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_1 /var/lib/ceph/osd/ceph-0 2026-03-20T11:45:23.397 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-0 2026-03-20T11:45:23.467 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-1 2026-03-20T11:45:23.534 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'} 2026-03-20T11:45:23.534 INFO:tasks.ceph:role: osd.1 2026-03-20T11:45:23.534 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_2 on ubuntu@vm00.local 2026-03-20T11:45:23.534 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_2 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_2 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T11:45:23.602 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T11:45:23.607 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
2026-03-20T11:45:23.610 INFO:tasks.ceph:mount /dev/vg_nvme/lv_2 on ubuntu@vm00.local -o noatime 2026-03-20T11:45:23.610 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_2 /var/lib/ceph/osd/ceph-1 2026-03-20T11:45:23.678 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-1 2026-03-20T11:45:23.745 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/osd/ceph-2 2026-03-20T11:45:23.808 INFO:tasks.ceph:roles_to_devs: {'osd.0': '/dev/vg_nvme/lv_1', 'osd.1': '/dev/vg_nvme/lv_2', 'osd.2': '/dev/vg_nvme/lv_3'} 2026-03-20T11:45:23.808 INFO:tasks.ceph:role: osd.2 2026-03-20T11:45:23.808 INFO:tasks.ceph:['mkfs.xfs', '-f', '-i', 'size=2048'] on /dev/vg_nvme/lv_3 on ubuntu@vm00.local 2026-03-20T11:45:23.808 DEBUG:teuthology.orchestra.run.vm00:> yes | sudo mkfs.xfs -f -i size=2048 /dev/vg_nvme/lv_3 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout:meta-data=/dev/vg_nvme/lv_3 isize=2048 agcount=4, agsize=1310464 blks 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 attr=2, projid32bit=1 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout: = crc=1 finobt=1, sparse=1, rmapbt=0 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout: = reflink=1 bigtime=1 inobtcount=1 nrext64=0 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout:data = bsize=4096 blocks=5241856, imaxpct=25 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout: = sunit=0 swidth=0 blks 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout:naming =version 2 bsize=4096 ascii-ci=0, ftype=1 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout:log =internal log bsize=4096 blocks=16384, version=2 2026-03-20T11:45:23.873 INFO:teuthology.orchestra.run.vm00.stdout: = sectsz=512 sunit=0 blks, lazy-count=1 2026-03-20T11:45:23.874 INFO:teuthology.orchestra.run.vm00.stdout:realtime =none extsz=4096 blocks=0, rtextents=0 2026-03-20T11:45:23.878 INFO:teuthology.orchestra.run.vm00.stdout:Discarding blocks...Done. 
2026-03-20T11:45:23.880 INFO:tasks.ceph:mount /dev/vg_nvme/lv_3 on ubuntu@vm00.local -o noatime 2026-03-20T11:45:23.880 DEBUG:teuthology.orchestra.run.vm00:> sudo mount -t xfs -o noatime /dev/vg_nvme/lv_3 /var/lib/ceph/osd/ceph-2 2026-03-20T11:45:23.948 DEBUG:teuthology.orchestra.run.vm00:> sudo /sbin/restorecon /var/lib/ceph/osd/ceph-2 2026-03-20T11:45:24.016 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 0 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:24.100 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.102+0000 7f62a8541900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-0/keyring: can't open /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory 2026-03-20T11:45:24.100 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.102+0000 7f62a8541900 -1 created new key in keyring /var/lib/ceph/osd/ceph-0/keyring 2026-03-20T11:45:24.100 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.102+0000 7f62a8541900 -1 bdev(0x555572627800 /var/lib/ceph/osd/ceph-0/block) open stat got: (1) Operation not permitted 2026-03-20T11:45:24.100 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.102+0000 7f62a8541900 -1 bluestore(/var/lib/ceph/osd/ceph-0) _read_fsid unparsable uuid 2026-03-20T11:45:24.698 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-0 2026-03-20T11:45:24.766 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 1 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:24.848 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.850+0000 7f58ea6c8900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-1/keyring: can't open /var/lib/ceph/osd/ceph-1/keyring: (2) No such file or directory 2026-03-20T11:45:24.849 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.851+0000 7f58ea6c8900 -1 created new key in keyring /var/lib/ceph/osd/ceph-1/keyring 2026-03-20T11:45:24.849 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.851+0000 7f58ea6c8900 -1 bdev(0x564c173dd800 /var/lib/ceph/osd/ceph-1/block) open stat got: (1) Operation not permitted 2026-03-20T11:45:24.849 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:24.851+0000 7f58ea6c8900 -1 bluestore(/var/lib/ceph/osd/ceph-1) _read_fsid unparsable uuid 2026-03-20T11:45:25.347 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 2026-03-20T11:45:25.413 DEBUG:teuthology.orchestra.run.vm00:> sudo MALLOC_CHECK_=3 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-osd --no-mon-config --cluster ceph --mkfs --mkkey -i 2 --monmap /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:25.494 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:25.496+0000 7ff40fab6900 -1 auth: error reading file: /var/lib/ceph/osd/ceph-2/keyring: can't open /var/lib/ceph/osd/ceph-2/keyring: (2) No such file or directory 2026-03-20T11:45:25.494 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:25.496+0000 7ff40fab6900 -1 created new key in keyring /var/lib/ceph/osd/ceph-2/keyring 2026-03-20T11:45:25.494 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:25.497+0000 7ff40fab6900 -1 bdev(0x55f3f06c5800 /var/lib/ceph/osd/ceph-2/block) open stat got: (1) Operation not permitted 
2026-03-20T11:45:25.495 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:25.497+0000 7ff40fab6900 -1 bluestore(/var/lib/ceph/osd/ceph-2) _read_fsid unparsable uuid 2026-03-20T11:45:25.931 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/osd/ceph-2 2026-03-20T11:45:25.999 INFO:tasks.ceph:Reading keys from all nodes... 2026-03-20T11:45:25.999 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:25.999 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/mgr/ceph-0/keyring of=/dev/stdout 2026-03-20T11:45:26.063 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:26.063 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/keyring of=/dev/stdout 2026-03-20T11:45:26.129 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:26.130 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/keyring of=/dev/stdout 2026-03-20T11:45:26.195 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:26.195 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/keyring of=/dev/stdout 2026-03-20T11:45:26.263 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:26.263 DEBUG:teuthology.orchestra.run.vm00:> dd if=/etc/ceph/ceph.client.0.keyring of=/dev/stdout 2026-03-20T11:45:26.321 INFO:tasks.ceph:Adding keys to all mons... 2026-03-20T11:45:26.322 DEBUG:teuthology.orchestra.run.vm00:> sudo tee -a /etc/ceph/ceph.keyring 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout:[mgr.0] 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBTM71pj/nsBxAA0qJk5/M3wWJSHYwaj0U7lQ== 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout:[osd.0] 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBUM71pYp4wBhAAYmFthjas5IUtNW62hES0kg== 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout:[osd.1] 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBUM71pJ9bLMhAA+eNG4WFBnpO3FeaOsvC/ig== 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout:[osd.2] 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBVM71pJUmwHRAAeVy7jJzcWa9Xh6b9bE8h5w== 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout:[client.0] 2026-03-20T11:45:26.384 INFO:teuthology.orchestra.run.vm00.stdout: key = AQBTM71plLNTChAAWNER3RR1dJFPsePHkU8MZA== 2026-03-20T11:45:26.385 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=mgr.0 --cap mon 'allow profile mgr' --cap osd 'allow *' --cap mds 'allow *' 2026-03-20T11:45:26.471 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.0 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T11:45:26.513 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.1 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T11:45:26.599 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=osd.2 --cap mon 'allow profile osd' --cap mgr 'allow profile osd' --cap osd 'allow *' 2026-03-20T11:45:26.644 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits 
ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-authtool /etc/ceph/ceph.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow 2026-03-20T11:45:26.692 INFO:tasks.ceph:Running mkfs on mon nodes... 2026-03-20T11:45:26.693 DEBUG:teuthology.orchestra.run.vm00:> sudo mkdir -p /var/lib/ceph/mon/ceph-a 2026-03-20T11:45:26.718 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph-mon --cluster ceph --mkfs -i a --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring 2026-03-20T11:45:26.815 DEBUG:teuthology.orchestra.run.vm00:> sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a 2026-03-20T11:45:26.842 DEBUG:teuthology.orchestra.run.vm00:> rm -- /home/ubuntu/cephtest/ceph.monmap 2026-03-20T11:45:26.898 INFO:tasks.ceph:Starting mon daemons in cluster ceph... 2026-03-20T11:45:26.898 INFO:tasks.ceph.mon.a:Restarting daemon 2026-03-20T11:45:26.898 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f --cluster ceph -i a 2026-03-20T11:45:26.940 INFO:tasks.ceph.mon.a:Started 2026-03-20T11:45:26.940 INFO:tasks.ceph:Starting mgr daemons in cluster ceph... 2026-03-20T11:45:26.941 INFO:tasks.ceph.mgr.0:Restarting daemon 2026-03-20T11:45:26.941 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mgr -f --cluster ceph -i 0 2026-03-20T11:45:26.942 INFO:tasks.ceph.mgr.0:Started 2026-03-20T11:45:26.942 DEBUG:tasks.ceph:set 0 configs 2026-03-20T11:45:26.942 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph config dump 2026-03-20T11:45:27.015 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.017+0000 7fc100433640 1 Processor -- start 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.017+0000 7fc100433640 1 -- start start 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.017+0000 7fc100433640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.017+0000 7fc100433640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fc0f805b1f0 con 0x7fc0f8058d20 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.017+0000 7fc100433640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fc0f805abd0 con 0x7fc0f8150bf0 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fe1a8640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7fc0f8058d20 0x7fc0f80590f0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:34666/0 (socket says 192.168.123.100:34666) 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fe1a8640 1 -- 192.168.123.100:0/2451556144 learned_addr learned my addr 192.168.123.100:0/2451556144 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fd9a7640 1 --2- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto 
rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fd9a7640 1 -- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 msgr2=0x7fc0f8170fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_bulk peer close file descriptor 14 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fd9a7640 1 -- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 msgr2=0x7fc0f8170fd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until read failed 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fd9a7640 1 --2- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_read_frame_preamble_main read frame preamble failed r=-1 2026-03-20T11:45:27.016 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.018+0000 7fc0fd9a7640 1 --2- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0)._fault waiting 0.200000 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3824914885 0 0) 0x7fc0f805b1f0 con 0x7fc0f8058d20 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fc0e8003610 con 0x7fc0f8058d20 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v1:192.168.123.100:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 4195582048 0 0) 0x7fc0e8003610 con 0x7fc0f8058d20 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 msgr2=0x7fc0f8170fd0 unknown :-1 s=STATE_CONNECTING l=0).mark_down 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 --2- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 --> v1:192.168.123.100:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc0f805aee0 con 0x7fc0f8058d20 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.029+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v1:192.168.123.100:6789/0 3 ==== mon_map magic: 0 ==== 205+0+0 (unknown 2760865362 0 0) 0x7fc0ec003100 con 0x7fc0f8058d20 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.030+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 >> v1:192.168.123.100:6789/0 conn(0x7fc0f8058d20 legacy=0x7fc0f80590f0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.028 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.030+0000 7fc0fd1a6640 1 --2- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0e8003f10 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.030+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc0f805abd0 con 0x7fc0e8003b40 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.030+0000 7fc0fe1a8640 1 --2- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0e8003f10 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.030+0000 7fc0fe1a8640 1 -- 192.168.123.100:0/2451556144 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc0e8003610 con 0x7fc0e8003b40 2026-03-20T11:45:27.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.031+0000 7fc0fe1a8640 1 --2- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0e8003f10 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fc0ec003530 tx=0x7fc0ec030190 comp rx=0 tx=0).ready entity=mon.0 client_cookie=386214527cc3236 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.031+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc0ec03d070 con 0x7fc0e8003b40 2026-03-20T11:45:27.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.032+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc0ec030ac0 con 0x7fc0e8003b40 2026-03-20T11:45:27.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.032+0000 7fc0fd1a6640 1 -- 192.168.123.100:0/2451556144 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc0ec00b040 con 0x7fc0e8003b40 2026-03-20T11:45:27.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.032+0000 7fc100433640 1 -- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 msgr2=0x7fc0e8003f10 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.032+0000 7fc100433640 1 --2- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0e8003f10 secure :-1 s=READY pgs=2 cs=0 l=1 rev1=1 crypto rx=0x7fc0ec003530 tx=0x7fc0ec030190 comp rx=0 tx=0).stop 2026-03-20T11:45:27.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 -- 192.168.123.100:0/2451556144 shutdown_connections 2026-03-20T11:45:27.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 --2- 192.168.123.100:0/2451556144 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0e8003f10 unknown :-1 s=CLOSED pgs=2 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.031 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 --2- 192.168.123.100:0/2451556144 >> v2:192.168.123.100:3300/0 conn(0x7fc0f8150bf0 0x7fc0f8170fd0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 -- 192.168.123.100:0/2451556144 >> 192.168.123.100:0/2451556144 conn(0x7fc0f8082bf0 msgr2=0x7fc0f8082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 -- 192.168.123.100:0/2451556144 shutdown_connections 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.033+0000 7fc100433640 1 -- 192.168.123.100:0/2451556144 wait complete. 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc100433640 1 Processor -- start 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc100433640 1 -- start start 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc100433640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc100433640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc0f8171510 con 0x7fc0e8003b40 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc0fe1a8640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc0fe1a8640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34778/0 (socket says 192.168.123.100:34778) 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc0fe1a8640 1 -- 192.168.123.100:0/3719985953 learned_addr learned my addr 192.168.123.100:0/3719985953 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc0fe1a8640 1 -- 192.168.123.100:0/3719985953 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc0f810ce70 con 0x7fc0e8003b40 2026-03-20T11:45:27.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.034+0000 7fc0fe1a8640 1 --2- 192.168.123.100:0/3719985953 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fc0ec00aae0 tx=0x7fc0ec006d50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc0ec048070 con 
0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc0ec0060d0 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc0ec03d040 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc0f810c030 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc0f8121a30 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7fc0ec0063b0 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 720+0+0 (secure 0 0 0) 0x7fc0ec04ddb0 con 0x7fc0e8003b40 2026-03-20T11:45:27.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.035+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc0c4005180 con 0x7fc0e8003b40 2026-03-20T11:45:27.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.037+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7fc0ec030a30 con 0x7fc0e8003b40 2026-03-20T11:45:27.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.073+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config dump"} v 0) -- 0x7fc0c4005740 con 0x7fc0e8003b40 2026-03-20T11:45:27.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.073+0000 7fc0e2ffd640 1 -- 192.168.123.100:0/3719985953 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config dump"}]=0 v1) ==== 59+0+36 (secure 0 0 0) 0x7fc0ec04d440 con 0x7fc0e8003b40 2026-03-20T11:45:27.072 INFO:teuthology.orchestra.run.vm00.stdout:WHO MASK LEVEL OPTION VALUE RO 2026-03-20T11:45:27.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 msgr2=0x7fc0f8121fc0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 --2- 192.168.123.100:0/3719985953 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 secure :-1 s=READY pgs=3 cs=0 l=1 rev1=1 crypto rx=0x7fc0ec00aae0 tx=0x7fc0ec006d50 comp rx=0 tx=0).stop 2026-03-20T11:45:27.073 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 shutdown_connections 2026-03-20T11:45:27.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 --2- 192.168.123.100:0/3719985953 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc0e8003b40 0x7fc0f8121fc0 unknown :-1 s=CLOSED pgs=3 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 >> 192.168.123.100:0/3719985953 conn(0x7fc0f8082bf0 msgr2=0x7fc0f80581e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 shutdown_connections 2026-03-20T11:45:27.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.074+0000 7fc100433640 1 -- 192.168.123.100:0/3719985953 wait complete. 2026-03-20T11:45:27.083 INFO:tasks.ceph:Setting crush tunables to default 2026-03-20T11:45:27.083 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd crush tunables default 2026-03-20T11:45:27.157 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.159+0000 7fcbe4624640 1 Processor -- start 2026-03-20T11:45:27.157 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.159+0000 7fcbe4624640 1 -- start start 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.159+0000 7fcbe4624640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fcbdc1516e0 0x7fcbdc171ac0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.159+0000 7fcbe4624640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fcbdc058680 con 0x7fcbdc130870 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.159+0000 7fcbe4624640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fcbdc0592f0 con 0x7fcbdc1516e0 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.160+0000 7fcbe1b98640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fcbdc1516e0 0x7fcbdc171ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.160+0000 7fcbe1b98640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fcbdc1516e0 0x7fcbdc171ac0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34788/0 (socket says 192.168.123.100:34788) 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.160+0000 7fcbe1b98640 1 -- 192.168.123.100:0/3491495946 learned_addr learned my addr 192.168.123.100:0/3491495946 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.160+0000 7fcbe1b98640 1 -- 192.168.123.100:0/3491495946 >> v1:192.168.123.100:6789/0 conn(0x7fcbdc130870 legacy=0x7fcbdc130c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:27.158 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.160+0000 7fcbe1b98640 1 -- 192.168.123.100:0/3491495946 --> v2:192.168.123.100:3300/0 -- 
mon_subscribe({config=0+,monmap=0+}) -- 0x7fcbdc059600 con 0x7fcbdc1516e0 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe1b98640 1 --2- 192.168.123.100:0/3491495946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc1516e0 0x7fcbdc171ac0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fcbd0009080 tx=0x7fcbd002ee70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=285553ff52a6717 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe1397640 1 -- 192.168.123.100:0/3491495946 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcbd003c070 con 0x7fcbdc1516e0 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe1397640 1 -- 192.168.123.100:0/3491495946 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fcbd002fab0 con 0x7fcbdc1516e0 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe1397640 1 -- 192.168.123.100:0/3491495946 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcbd002fdb0 con 0x7fcbdc1516e0 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 -- 192.168.123.100:0/3491495946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc1516e0 msgr2=0x7fcbdc171ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 --2- 192.168.123.100:0/3491495946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc1516e0 0x7fcbdc171ac0 secure :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0x7fcbd0009080 tx=0x7fcbd002ee70 comp rx=0 tx=0).stop 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 -- 192.168.123.100:0/3491495946 shutdown_connections 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 --2- 192.168.123.100:0/3491495946 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc1516e0 0x7fcbdc171ac0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 -- 192.168.123.100:0/3491495946 >> 192.168.123.100:0/3491495946 conn(0x7fcbdc082930 msgr2=0x7fcbdc082d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 -- 192.168.123.100:0/3491495946 shutdown_connections 2026-03-20T11:45:27.159 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.161+0000 7fcbe4624640 1 -- 192.168.123.100:0/3491495946 wait complete. 
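The monitor bootstrap that the ceph task performs above reduces to a short command sequence; a condensed sketch in shell, assuming the same paths as in the log and omitting teuthology's adjust-ulimits/ceph-coverage/daemon-helper wrappers:

    # create and initialise the mon.a data directory from the generated monmap and keyring
    sudo mkdir -p /var/lib/ceph/mon/ceph-a
    sudo ceph-mon --cluster ceph --mkfs -i a \
        --monmap /home/ubuntu/cephtest/ceph.monmap --keyring /etc/ceph/ceph.keyring
    sudo chown -R ceph:ceph /var/lib/ceph/mon/ceph-a
    # run mon.a and mgr.0 in the foreground (the task keeps them under daemon-helper)
    sudo ceph-mon -f --cluster ceph -i a &
    sudo ceph-mgr -f --cluster ceph -i 0 &
    # verify the cluster answers and has no custom config yet ("set 0 configs" above)
    sudo ceph --cluster ceph config dump

The verbose "--" / "--2-" stderr lines surrounding each ceph CLI call are client-side messenger debug output; they repeat for every invocation below.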
2026-03-20T11:45:27.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.162+0000 7fcbe4624640 1 Processor -- start 2026-03-20T11:45:27.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.162+0000 7fcbe4624640 1 -- start start 2026-03-20T11:45:27.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.162+0000 7fcbe4624640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.162+0000 7fcbe4624640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fcbdc172390 con 0x7fcbdc130870 2026-03-20T11:45:27.160 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe2399640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe2399640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34798/0 (socket says 192.168.123.100:34798) 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe2399640 1 -- 192.168.123.100:0/4049944784 learned_addr learned my addr 192.168.123.100:0/4049944784 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe2399640 1 -- 192.168.123.100:0/4049944784 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fcbdc07d510 con 0x7fcbdc130870 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe2399640 1 --2- 192.168.123.100:0/4049944784 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc0036b0 tx=0x7fcbcc00cb20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcbcc016020 con 0x7fcbdc130870 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fcbcc005020 con 0x7fcbdc130870 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fcbcc005300 con 0x7fcbdc130870 2026-03-20T11:45:27.161 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fcbdc07da60 con 0x7fcbdc130870 2026-03-20T11:45:27.161 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.163+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fcbdc07dd20 con 0x7fcbdc130870 2026-03-20T11:45:27.162 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.164+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fcbdc07cdf0 con 0x7fcbdc130870 2026-03-20T11:45:27.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.165+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7fcbcc006cd0 con 0x7fcbdc130870 2026-03-20T11:45:27.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.165+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(1..1 src has 1..1) ==== 720+0+0 (secure 0 0 0) 0x7fcbcc012420 con 0x7fcbdc130870 2026-03-20T11:45:27.163 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.165+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7fcbcc00a660 con 0x7fcbdc130870 2026-03-20T11:45:27.199 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.201+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd crush tunables", "profile": "default"} v 0) -- 0x7fcbdc130c40 con 0x7fcbdc130870 2026-03-20T11:45:27.200 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.202+0000 7fcbc2ffd640 1 -- 192.168.123.100:0/4049944784 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd crush tunables", "profile": "default"}]=0 adjusted tunables profile to default v2) ==== 124+0+0 (secure 0 0 0) 0x7fcbcc0054a0 con 0x7fcbdc130870 2026-03-20T11:45:27.200 INFO:teuthology.orchestra.run.vm00.stderr:adjusted tunables profile to default 2026-03-20T11:45:27.201 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.203+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 msgr2=0x7fcbdc07ba90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.201 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.203+0000 7fcbe4624640 1 --2- 192.168.123.100:0/4049944784 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 secure :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0x7fcbcc0036b0 tx=0x7fcbcc00cb20 comp rx=0 tx=0).stop 2026-03-20T11:45:27.203 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.205+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 shutdown_connections 2026-03-20T11:45:27.203 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.205+0000 7fcbe4624640 1 --2- 192.168.123.100:0/4049944784 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fcbdc130870 0x7fcbdc07ba90 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.203 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.205+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 >> 192.168.123.100:0/4049944784 conn(0x7fcbdc082930 msgr2=0x7fcbdc05af30 unknown :-1 s=STATE_NONE l=0).mark_down 
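The mon_command_ack above ("adjusted tunables profile to default") confirms the crush tunables step. The equivalent manual commands, with a follow-up inspection that this job does not run, would be roughly:

    sudo ceph --cluster ceph osd crush tunables default
    sudo ceph --cluster ceph osd crush show-tunables   # inspection step, not part of this job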
2026-03-20T11:45:27.204 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.206+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 shutdown_connections 2026-03-20T11:45:27.204 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.206+0000 7fcbe4624640 1 -- 192.168.123.100:0/4049944784 wait complete. 2026-03-20T11:45:27.215 INFO:tasks.ceph:check_enable_crimson: False 2026-03-20T11:45:27.216 INFO:tasks.ceph:Starting osd daemons in cluster ceph... 2026-03-20T11:45:27.216 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:27.216 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-0/fsid of=/dev/stdout 2026-03-20T11:45:27.242 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:27.242 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-1/fsid of=/dev/stdout 2026-03-20T11:45:27.308 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:45:27.309 DEBUG:teuthology.orchestra.run.vm00:> sudo dd if=/var/lib/ceph/osd/ceph-2/fsid of=/dev/stdout 2026-03-20T11:45:27.385 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd new 232f165d-e880-471c-ad41-9cbb77b50aed 0 2026-03-20T11:45:27.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.506+0000 7f69945fa640 1 Processor -- start 2026-03-20T11:45:27.504 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.506+0000 7f69945fa640 1 -- start start 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.506+0000 7f69945fa640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f698c130410 0x7f698c177e60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.506+0000 7f69945fa640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f698c05a0f0 con 0x7f698c130860 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.506+0000 7f69945fa640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f698c05a9c0 con 0x7f698c130410 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f6991b6e640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f698c130410 0x7f698c177e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699236f640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f698c130860 0x7f698c130c30 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:34694/0 (socket says 192.168.123.100:34694) 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699236f640 1 -- 192.168.123.100:0/3277258650 learned_addr learned my addr 192.168.123.100:0/3277258650 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699136d640 1 -- 192.168.123.100:0/3277258650 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1983570196 0 0) 0x7f698c05a0f0 con 0x7f698c130860 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699136d640 1 -- 192.168.123.100:0/3277258650 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f6978003610 con 
0x7f698c130860 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f6991b6e640 1 -- 192.168.123.100:0/3277258650 >> v1:192.168.123.100:6789/0 conn(0x7f698c130860 legacy=0x7f698c130c30 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f6991b6e640 1 -- 192.168.123.100:0/3277258650 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f698c1793b0 con 0x7f698c130410 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f6991b6e640 1 --2- 192.168.123.100:0/3277258650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c177e60 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f69740097b0 tx=0x7f697402eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=575e8e9ea451f72e server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699136d640 1 -- 192.168.123.100:0/3277258650 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f697403c070 con 0x7f698c130410 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699136d640 1 -- 192.168.123.100:0/3277258650 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f697402f9e0 con 0x7f698c130410 2026-03-20T11:45:27.505 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.507+0000 7f699136d640 1 -- 192.168.123.100:0/3277258650 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f697402fce0 con 0x7f698c130410 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 -- 192.168.123.100:0/3277258650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 msgr2=0x7f698c177e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 --2- 192.168.123.100:0/3277258650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c177e60 secure :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0x7f69740097b0 tx=0x7f697402eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 -- 192.168.123.100:0/3277258650 shutdown_connections 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 --2- 192.168.123.100:0/3277258650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c177e60 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 -- 192.168.123.100:0/3277258650 >> 192.168.123.100:0/3277258650 conn(0x7f698c082930 msgr2=0x7f698c082d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 -- 192.168.123.100:0/3277258650 shutdown_connections 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 -- 192.168.123.100:0/3277258650 wait complete. 
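The OSD bring-up started above repeats the same pattern for osd.0, osd.1 and osd.2: read the fsid that was written into each pre-created data directory, then register that uuid with the monitor. A minimal sketch of the loop, using the same paths and commands as the log:

    for id in 0 1 2; do
        # fsid was written by the earlier OSD preparation step
        uuid=$(sudo dd if=/var/lib/ceph/osd/ceph-${id}/fsid of=/dev/stdout 2>/dev/null)
        # "osd new" registers the uuid and prints the assigned osd id (0, 1, 2 below)
        sudo ceph --cluster ceph osd new "${uuid}" "${id}"
    done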
2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.508+0000 7f69945fa640 1 Processor -- start 2026-03-20T11:45:27.506 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f69945fa640 1 -- start start 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f69945fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f69945fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f698c178730 con 0x7f698c130410 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f699236f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f699236f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34818/0 (socket says 192.168.123.100:34818) 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f699236f640 1 -- 192.168.123.100:0/1323016788 learned_addr learned my addr 192.168.123.100:0/1323016788 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f699236f640 1 -- 192.168.123.100:0/1323016788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f698c080280 con 0x7f698c130410 2026-03-20T11:45:27.507 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f699236f640 1 --2- 192.168.123.100:0/1323016788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f697c007c40 tx=0x7f697c00cb20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f697c017070 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f697c0059a0 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f698c077f30 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f697c005ca0 con 0x7f698c130410 2026-03-20T11:45:27.508 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.509+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f698c078e20 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.510+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7f697c007480 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.510+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(2..2 src has 1..2) ==== 720+0+0 (secure 0 0 0) 0x7f697c01c6b0 con 0x7f698c130410 2026-03-20T11:45:27.508 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.510+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6950005180 con 0x7f698c130410 2026-03-20T11:45:27.510 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.512+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7f697c01c900 con 0x7f698c130410 2026-03-20T11:45:27.555 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.557+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd new", "uuid": "232f165d-e880-471c-ad41-9cbb77b50aed", "id": 0} v 0) -- 0x7f6950005470 con 0x7f698c130410 2026-03-20T11:45:27.557 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.559+0000 7f6982ffd640 1 -- 192.168.123.100:0/1323016788 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd new", "uuid": "232f165d-e880-471c-ad41-9cbb77b50aed", "id": 0}]=0 v3) ==== 112+0+2 (secure 0 0 0) 0x7f697c01cae0 con 0x7f698c130410 2026-03-20T11:45:27.558 INFO:teuthology.orchestra.run.vm00.stdout:0 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 msgr2=0x7f698c076d70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 --2- 192.168.123.100:0/1323016788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 secure :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0x7f697c007c40 tx=0x7f697c00cb20 comp rx=0 tx=0).stop 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 shutdown_connections 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 --2- 192.168.123.100:0/1323016788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f698c130410 0x7f698c076d70 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 >> 192.168.123.100:0/1323016788 conn(0x7f698c082930 msgr2=0x7f698c05acd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.561 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 shutdown_connections 2026-03-20T11:45:27.561 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.563+0000 7f69945fa640 1 -- 192.168.123.100:0/1323016788 wait complete. 2026-03-20T11:45:27.571 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd new 59a8c5e0-6c84-431b-ac69-a2f3326598f8 1 2026-03-20T11:45:27.650 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.652+0000 7fb910aae640 1 Processor -- start 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.652+0000 7fb910aae640 1 -- start start 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb910aae640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fb90c07f990 0x7fb90c07fd60 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb910aae640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fb90c05af60 con 0x7fb90c058970 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb910aae640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fb90c059e10 con 0x7fb90c07f990 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb909d74640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7fb90c058970 0x7fb90c058dd0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:34706/0 (socket says 192.168.123.100:34706) 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb909d74640 1 -- 192.168.123.100:0/1918835543 learned_addr learned my addr 192.168.123.100:0/1918835543 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb90a575640 1 --2- 192.168.123.100:0/1918835543 >> v2:192.168.123.100:3300/0 conn(0x7fb90c07f990 0x7fb90c07fd60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb90a575640 1 -- 192.168.123.100:0/1918835543 >> v1:192.168.123.100:6789/0 conn(0x7fb90c058970 legacy=0x7fb90c058dd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.653+0000 7fb90a575640 1 -- 192.168.123.100:0/1918835543 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb90c05b270 con 0x7fb90c07f990 2026-03-20T11:45:27.651 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb90a575640 1 --2- 192.168.123.100:0/1918835543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c07f990 0x7fb90c07fd60 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fb8f8004770 tx=0x7fb8f8030210 comp rx=0 tx=0).ready entity=mon.0 client_cookie=359c1c176180df95 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb909573640 1 -- 192.168.123.100:0/1918835543 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb8f803d070 con 0x7fb90c07f990 2026-03-20T11:45:27.652 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb909573640 1 -- 192.168.123.100:0/1918835543 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fb8f8030dd0 con 0x7fb90c07f990 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb909573640 1 -- 192.168.123.100:0/1918835543 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb8f80385b0 con 0x7fb90c07f990 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 -- 192.168.123.100:0/1918835543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c07f990 msgr2=0x7fb90c07fd60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 --2- 192.168.123.100:0/1918835543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c07f990 0x7fb90c07fd60 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7fb8f8004770 tx=0x7fb8f8030210 comp rx=0 tx=0).stop 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 -- 192.168.123.100:0/1918835543 shutdown_connections 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 --2- 192.168.123.100:0/1918835543 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c07f990 0x7fb90c07fd60 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 -- 192.168.123.100:0/1918835543 >> 192.168.123.100:0/1918835543 conn(0x7fb90c082930 msgr2=0x7fb90c082d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 -- 192.168.123.100:0/1918835543 shutdown_connections 2026-03-20T11:45:27.652 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.654+0000 7fb910aae640 1 -- 192.168.123.100:0/1918835543 wait complete. 
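Each successful "osd new" bumps the osdmap epoch: the monitor reported osd_map(1..1) before any OSD existed and osd_map(2..2) once osd.0 was registered above; osd.1 and osd.2 follow the same pattern. A quick sanity check after the loop, not run by this job, would be:

    sudo ceph --cluster ceph osd tree   # should list osd.0-2 once all three are registered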
2026-03-20T11:45:27.653 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.655+0000 7fb910aae640 1 Processor -- start 2026-03-20T11:45:27.653 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.655+0000 7fb910aae640 1 -- start start 2026-03-20T11:45:27.653 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.655+0000 7fb910aae640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.653 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.655+0000 7fb910aae640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fb90c057ec0 con 0x7fb90c058970 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb90a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb90a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34838/0 (socket says 192.168.123.100:34838) 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb90a575640 1 -- 192.168.123.100:0/487167766 learned_addr learned my addr 192.168.123.100:0/487167766 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb90a575640 1 -- 192.168.123.100:0/487167766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fb90c1b4b20 con 0x7fb90c058970 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb90a575640 1 --2- 192.168.123.100:0/487167766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fb8f800ae10 tx=0x7fb8f80047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.654 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb8f803d040 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fb8f8005c60 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fb8f8038a40 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fb90c1b3ce0 con 0x7fb90c058970 2026-03-20T11:45:27.655 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fb90c1c96e0 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7fb8f8040070 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.656+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(3..3 src has 1..3) ==== 835+0+0 (secure 0 0 0) 0x7fb8f8046020 con 0x7fb90c058970 2026-03-20T11:45:27.655 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.657+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fb8cc005180 con 0x7fb90c058970 2026-03-20T11:45:27.658 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.660+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7fb8f8041bf0 con 0x7fb90c058970 2026-03-20T11:45:27.697 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.699+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd new", "uuid": "59a8c5e0-6c84-431b-ac69-a2f3326598f8", "id": 1} v 0) -- 0x7fb8cc005470 con 0x7fb90c058970 2026-03-20T11:45:27.699 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.701+0000 7fb8f6ffd640 1 -- 192.168.123.100:0/487167766 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd new", "uuid": "59a8c5e0-6c84-431b-ac69-a2f3326598f8", "id": 1}]=0 v4) ==== 112+0+2 (secure 0 0 0) 0x7fb8f8038be0 con 0x7fb90c058970 2026-03-20T11:45:27.699 INFO:teuthology.orchestra.run.vm00.stdout:1 2026-03-20T11:45:27.700 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.702+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 msgr2=0x7fb90c1c9c70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.702+0000 7fb910aae640 1 --2- 192.168.123.100:0/487167766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7fb8f800ae10 tx=0x7fb8f80047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.703+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 shutdown_connections 2026-03-20T11:45:27.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.703+0000 7fb910aae640 1 --2- 192.168.123.100:0/487167766 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fb90c058970 0x7fb90c1c9c70 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.701 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.703+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 >> 192.168.123.100:0/487167766 conn(0x7fb90c082930 msgr2=0x7fb90c07dc90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.701 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.703+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 shutdown_connections 2026-03-20T11:45:27.705 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.707+0000 7fb910aae640 1 -- 192.168.123.100:0/487167766 wait complete. 2026-03-20T11:45:27.714 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd new 3e2deeca-bacd-4ce3-abce-84b4e72b511b 2 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a3f45640 1 Processor -- start 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a3f45640 1 -- start start 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a3f45640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7ff29c1516e0 0x7ff29c171ac0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a3f45640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7ff29c058680 con 0x7ff29c130870 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a3f45640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7ff29c0592f0 con 0x7ff29c1516e0 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a14b9640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7ff29c1516e0 0x7ff29c171ac0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a1cba640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7ff29c130870 0x7ff29c130c40 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:34712/0 (socket says 192.168.123.100:34712) 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a1cba640 1 -- 192.168.123.100:0/1865900474 learned_addr learned my addr 192.168.123.100:0/1865900474 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a0cb8640 1 -- 192.168.123.100:0/1865900474 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 373561230 0 0) 0x7ff29c058680 con 0x7ff29c130870 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a0cb8640 1 -- 192.168.123.100:0/1865900474 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7ff288003610 con 0x7ff29c130870 2026-03-20T11:45:27.792 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.794+0000 7ff2a14b9640 1 -- 192.168.123.100:0/1865900474 >> v1:192.168.123.100:6789/0 conn(0x7ff29c130870 legacy=0x7ff29c130c40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.795+0000 7ff2a14b9640 1 -- 192.168.123.100:0/1865900474 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff29c059600 con 0x7ff29c1516e0 2026-03-20T11:45:27.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.795+0000 7ff2a14b9640 1 --2- 192.168.123.100:0/1865900474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] 
conn(0x7ff29c1516e0 0x7ff29c171ac0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7ff28c004770 tx=0x7ff28c02eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b8a3b690a5c1c380 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.795+0000 7ff2a0cb8640 1 -- 192.168.123.100:0/1865900474 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff28c03c070 con 0x7ff29c1516e0 2026-03-20T11:45:27.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.795+0000 7ff2a0cb8640 1 -- 192.168.123.100:0/1865900474 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7ff28c02f9e0 con 0x7ff29c1516e0 2026-03-20T11:45:27.793 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.795+0000 7ff2a0cb8640 1 -- 192.168.123.100:0/1865900474 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff28c02fcc0 con 0x7ff29c1516e0 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- 192.168.123.100:0/1865900474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c1516e0 msgr2=0x7ff29c171ac0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 --2- 192.168.123.100:0/1865900474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c1516e0 0x7ff29c171ac0 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7ff28c004770 tx=0x7ff28c02eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- 192.168.123.100:0/1865900474 shutdown_connections 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 --2- 192.168.123.100:0/1865900474 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c1516e0 0x7ff29c171ac0 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- 192.168.123.100:0/1865900474 >> 192.168.123.100:0/1865900474 conn(0x7ff29c082930 msgr2=0x7ff29c082d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- 192.168.123.100:0/1865900474 shutdown_connections 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- 192.168.123.100:0/1865900474 wait complete. 
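[annotation] Because the client runs with `debug ms = 1`, every `ceph` CLI call above logs its full messenger lifecycle (connect, banner/hello, READY, `mon_command`, `mon_command_ack`, then `mark_down`/`shutdown_connections`), which makes the raw log noisy. A small, illustrative filter (not part of teuthology; it only assumes the `mon_command({...} v N)` formatting visible in this excerpt) can list just the monitor commands a job issued:

    # sketch only: summarize mon commands from a teuthology console log
    import re
    import sys

    MON_CMD = re.compile(r'mon_command\((\{.*?\}) v \d+\)')

    def mon_commands(log_path):
        cmds = []
        with open(log_path, errors="replace") as fh:
            for line in fh:
                cmds.extend(MON_CMD.findall(line))   # ack lines do not match
        return cmds

    if __name__ == "__main__":
        for payload in mon_commands(sys.argv[1]):
            print(payload)

Running it over this job's log would print entries such as {"prefix": "osd new", "uuid": "59a8c5e0-6c84-431b-ac69-a2f3326598f8", "id": 1}.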
2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 Processor -- start 2026-03-20T11:45:27.794 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.796+0000 7ff2a3f45640 1 -- start start 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a3f45640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a3f45640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7ff29c12fc60 con 0x7ff29c130870 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a1cba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a1cba640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34862/0 (socket says 192.168.123.100:34862) 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a1cba640 1 -- 192.168.123.100:0/2854917544 learned_addr learned my addr 192.168.123.100:0/2854917544 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a1cba640 1 -- 192.168.123.100:0/2854917544 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7ff29c076f40 con 0x7ff29c130870 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a1cba640 1 --2- 192.168.123.100:0/2854917544 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7ff284007c40 tx=0x7ff28400cb20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff284017070 con 0x7ff29c130870 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7ff29c077ad0 con 0x7ff29c130870 2026-03-20T11:45:27.795 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.797+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7ff29c0777c0 con 0x7ff29c130870 2026-03-20T11:45:27.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.798+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7ff2840059a0 con 0x7ff29c130870 2026-03-20T11:45:27.796 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.798+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7ff284005ca0 con 0x7ff29c130870 2026-03-20T11:45:27.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.798+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7ff284007480 con 0x7ff29c130870 2026-03-20T11:45:27.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.798+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(4..4 src has 1..4) ==== 950+0+0 (secure 0 0 0) 0x7ff28401cad0 con 0x7ff29c130870 2026-03-20T11:45:27.796 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.798+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7ff29c130c40 con 0x7ff29c130870 2026-03-20T11:45:27.802 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.800+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7ff28401cd40 con 0x7ff29c130870 2026-03-20T11:45:27.844 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.846+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd new", "uuid": "3e2deeca-bacd-4ce3-abce-84b4e72b511b", "id": 2} v 0) -- 0x7ff29c038120 con 0x7ff29c130870 2026-03-20T11:45:27.846 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.848+0000 7ff2927fc640 1 -- 192.168.123.100:0/2854917544 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd new", "uuid": "3e2deeca-bacd-4ce3-abce-84b4e72b511b", "id": 2}]=0 v5) ==== 112+0+2 (secure 0 0 0) 0x7ff28401c6b0 con 0x7ff29c130870 2026-03-20T11:45:27.846 INFO:teuthology.orchestra.run.vm00.stdout:2 2026-03-20T11:45:27.846 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 msgr2=0x7ff29c150ec0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:27.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 --2- 192.168.123.100:0/2854917544 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7ff284007c40 tx=0x7ff28400cb20 comp rx=0 tx=0).stop 2026-03-20T11:45:27.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 shutdown_connections 2026-03-20T11:45:27.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 --2- 192.168.123.100:0/2854917544 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7ff29c130870 0x7ff29c150ec0 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:27.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 >> 192.168.123.100:0/2854917544 conn(0x7ff29c082930 msgr2=0x7ff29c05c4e0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:27.847 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 shutdown_connections 2026-03-20T11:45:27.847 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:27.849+0000 7ff2a3f45640 1 -- 192.168.123.100:0/2854917544 wait complete. 2026-03-20T11:45:27.859 INFO:tasks.ceph.osd.0:Restarting daemon 2026-03-20T11:45:27.859 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 0 2026-03-20T11:45:27.860 INFO:tasks.ceph.osd.0:Started 2026-03-20T11:45:27.860 INFO:tasks.ceph.osd.1:Restarting daemon 2026-03-20T11:45:27.860 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 1 2026-03-20T11:45:27.862 INFO:tasks.ceph.osd.1:Started 2026-03-20T11:45:27.862 INFO:tasks.ceph.osd.2:Restarting daemon 2026-03-20T11:45:27.862 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-osd -f --cluster ceph -i 2 2026-03-20T11:45:27.866 INFO:tasks.ceph.osd.2:Started 2026-03-20T11:45:27.866 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510f577640 1 Processor -- start 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510f577640 1 -- start start 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510f577640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5110058da0 0x7f5110059170 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510f577640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f511005aea0 con 0x7f51100596b0 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510f577640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f5110059c80 con 0x7f5110058da0 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510e575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5110058da0 0x7f5110059170 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510dd74640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f51100596b0 0x7f5110172ec0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:34766/0 (socket says 192.168.123.100:34766) 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510dd74640 1 -- 192.168.123.100:0/1207215627 learned_addr learned my addr 192.168.123.100:0/1207215627 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510e575640 1 -- 192.168.123.100:0/1207215627 >> v1:192.168.123.100:6789/0 conn(0x7f51100596b0 legacy=0x7f5110172ec0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
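[annotation] The step above registers each OSD with the monitor via `sudo ceph --cluster ceph osd new <uuid> <id>`; the monitor acks the command and the assigned id is echoed on stdout (the `stdout:1` / `stdout:2` lines), after which the three `ceph-osd -f` daemons are launched under `daemon-helper` and an `osd dump` is issued to check on them. A minimal stand-alone sketch of the registration step, assuming only the `ceph` CLI invocation shown in this log (this is not teuthology's own code):

    # sketch only: register an OSD uuid and capture the id the mon assigns
    import subprocess
    from typing import Optional

    def osd_new(osd_uuid, osd_id=None, cluster="ceph"):
        # type: (str, Optional[int], str) -> int
        cmd = ["sudo", "ceph", "--cluster", cluster, "osd", "new", osd_uuid]
        if osd_id is not None:
            cmd.append(str(osd_id))
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return int(result.stdout.strip())   # e.g. "2\n" -> 2

For example, osd_new("3e2deeca-bacd-4ce3-abce-84b4e72b511b", 2) would return 2, matching the ack seen above.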
2026-03-20T11:45:27.999 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.001+0000 7f510e575640 1 -- 192.168.123.100:0/1207215627 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f511005ab90 con 0x7f5110058da0 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510e575640 1 --2- 192.168.123.100:0/1207215627 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110059170 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f50f8004770 tx=0x7f50f802eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=823bd23241dca995 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510d573640 1 -- 192.168.123.100:0/1207215627 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f50f803c070 con 0x7f5110058da0 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510d573640 1 -- 192.168.123.100:0/1207215627 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f50f802f9e0 con 0x7f5110058da0 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510d573640 1 -- 192.168.123.100:0/1207215627 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f50f802fce0 con 0x7f5110058da0 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510f577640 1 -- 192.168.123.100:0/1207215627 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 msgr2=0x7f5110059170 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:28.000 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.002+0000 7f510f577640 1 --2- 192.168.123.100:0/1207215627 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110059170 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f50f8004770 tx=0x7f50f802eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.003+0000 7f510f577640 1 -- 192.168.123.100:0/1207215627 shutdown_connections 2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.003+0000 7f510f577640 1 --2- 192.168.123.100:0/1207215627 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110059170 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.003+0000 7f510f577640 1 -- 192.168.123.100:0/1207215627 >> 192.168.123.100:0/1207215627 conn(0x7f5110087390 msgr2=0x7f51100570b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.003+0000 7f510f577640 1 -- 192.168.123.100:0/1207215627 shutdown_connections 2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.003+0000 7f510f577640 1 -- 192.168.123.100:0/1207215627 wait complete. 
2026-03-20T11:45:28.001 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.004+0000 7f510f577640 1 Processor -- start 2026-03-20T11:45:28.002 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.004+0000 7f510f577640 1 -- start start 2026-03-20T11:45:28.002 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.004+0000 7f510f577640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:28.002 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.004+0000 7f510f577640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5110174970 con 0x7f5110058da0 2026-03-20T11:45:28.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.006+0000 7f510e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:28.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.006+0000 7f510e575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34926/0 (socket says 192.168.123.100:34926) 2026-03-20T11:45:28.004 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.006+0000 7f510e575640 1 -- 192.168.123.100:0/1141168716 learned_addr learned my addr 192.168.123.100:0/1141168716 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f510e575640 1 -- 192.168.123.100:0/1141168716 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f511010ce50 con 0x7f5110058da0 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f510e575640 1 --2- 192.168.123.100:0/1141168716 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f50f8009880 tx=0x7f50f80047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f50f8046070 con 0x7f5110058da0 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f50f8037ce0 con 0x7f5110058da0 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f50f803c040 con 0x7f5110058da0 2026-03-20T11:45:28.006 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f510f577640 1 -- 192.168.123.100:0/1141168716 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f511010cb20 con 0x7f5110058da0 2026-03-20T11:45:28.006 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.008+0000 7f510f577640 1 -- 192.168.123.100:0/1141168716 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5110117530 con 0x7f5110058da0 2026-03-20T11:45:28.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.009+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7f50f8053020 con 0x7f5110058da0 2026-03-20T11:45:28.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.009+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(5..5 src has 1..5) ==== 1065+0+0 (secure 0 0 0) 0x7f50f8042df0 con 0x7f5110058da0 2026-03-20T11:45:28.007 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.009+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f50cc005180 con 0x7f5110058da0 2026-03-20T11:45:28.008 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.010+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7f50f80403c0 con 0x7f5110058da0 2026-03-20T11:45:28.031 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T11:45:28.033+0000 7fdebd197900 -1 Falling back to public interface 2026-03-20T11:45:28.039 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T11:45:28.036+0000 7efd0c1da900 -1 Falling back to public interface 2026-03-20T11:45:28.043 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.045+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f50cc005740 con 0x7f5110058da0 2026-03-20T11:45:28.044 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.046+0000 7f50f6ffd640 1 -- 192.168.123.100:0/1141168716 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v5) ==== 74+0+3425 (secure 0 0 0) 0x7f50f80405a0 con 0x7f5110058da0 2026-03-20T11:45:28.046 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:28.046 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"fsid":"d2998f34-0acb-4cf3-b295-d778019a8c29","created":"2026-03-20T11:45:27.023905+0000","modified":"2026-03-20T11:45:27.848362+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T11:45:27.848362+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"232f165d-e880-471c-ad41-9cbb77b50aed","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 
0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":1,"uuid":"59a8c5e0-6c84-431b-ac69-a2f3326598f8","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"3e2deeca-bacd-4ce3-abce-84b4e72b511b","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.049+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 msgr2=0x7f5110116ff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.049+0000 7f50f4ff9640 1 --2- 192.168.123.100:0/1141168716 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f50f8009880 tx=0x7f50f80047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.049+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 shutdown_connections 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.049+0000 7f50f4ff9640 1 --2- 192.168.123.100:0/1141168716 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5110058da0 0x7f5110116ff0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.049+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 >> 
192.168.123.100:0/1141168716 conn(0x7f5110087390 msgr2=0x7f511007b2d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:28.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.050+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 shutdown_connections 2026-03-20T11:45:28.050 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T11:45:28.052+0000 7f61730ad900 -1 Falling back to public interface 2026-03-20T11:45:28.051 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:28.053+0000 7f50f4ff9640 1 -- 192.168.123.100:0/1141168716 wait complete. 2026-03-20T11:45:28.061 INFO:tasks.ceph.ceph_manager.ceph:[] 2026-03-20T11:45:28.061 INFO:tasks.ceph:Waiting for OSDs to come up 2026-03-20T11:45:28.085 INFO:tasks.ceph.mgr.0.vm00.stderr:/usr/lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-20T11:45:28.085 INFO:tasks.ceph.mgr.0.vm00.stderr:Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-20T11:45:28.085 INFO:tasks.ceph.mgr.0.vm00.stderr: from numpy import show_config as show_numpy_config 2026-03-20T11:45:28.172 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T11:45:28.173+0000 7fdebd197900 -1 osd.1 0 log_to_monitors true 2026-03-20T11:45:28.194 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T11:45:28.196+0000 7f61730ad900 -1 osd.2 0 log_to_monitors true 2026-03-20T11:45:28.201 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T11:45:28.202+0000 7efd0c1da900 -1 osd.0 0 log_to_monitors true 2026-03-20T11:45:28.362 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.435+0000 7f6f356c7640 1 Processor -- start 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.435+0000 7f6f356c7640 1 -- start start 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f356c7640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f6f30058330 0x7f6f30058700 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f356c7640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f6f3005a670 con 0x7f6f30058c40 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f356c7640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f6f3005a980 con 0x7f6f30058330 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f2effd640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f6f30058330 0x7f6f30058700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f2e7fc640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f6f30058c40 0x7f6f30172a60 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am 
v1:192.168.123.100:34774/0 (socket says 192.168.123.100:34774) 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f2e7fc640 1 -- 192.168.123.100:0/580002290 learned_addr learned my addr 192.168.123.100:0/580002290 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f2effd640 1 -- 192.168.123.100:0/580002290 >> v1:192.168.123.100:6789/0 conn(0x7f6f30058c40 legacy=0x7f6f30172a60 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:28.434 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.436+0000 7f6f2effd640 1 -- 192.168.123.100:0/580002290 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6f30175190 con 0x7f6f30058330 2026-03-20T11:45:28.435 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.437+0000 7f6f2effd640 1 --2- 192.168.123.100:0/580002290 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30058700 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f6f1c004770 tx=0x7f6f1c02eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=7567217e35f52abd server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:28.435 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.437+0000 7f6f2dffb640 1 -- 192.168.123.100:0/580002290 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6f1c03c070 con 0x7f6f30058330 2026-03-20T11:45:28.435 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.437+0000 7f6f2dffb640 1 -- 192.168.123.100:0/580002290 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f6f1c02f9e0 con 0x7f6f30058330 2026-03-20T11:45:28.435 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.437+0000 7f6f2dffb640 1 -- 192.168.123.100:0/580002290 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6f1c02fce0 con 0x7f6f30058330 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 -- 192.168.123.100:0/580002290 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 msgr2=0x7f6f30058700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 --2- 192.168.123.100:0/580002290 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30058700 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f6f1c004770 tx=0x7f6f1c02eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 -- 192.168.123.100:0/580002290 shutdown_connections 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 --2- 192.168.123.100:0/580002290 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30058700 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 -- 192.168.123.100:0/580002290 >> 192.168.123.100:0/580002290 conn(0x7f6f300828c0 msgr2=0x7f6f30082cc0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 -- 192.168.123.100:0/580002290 shutdown_connections 
2026-03-20T11:45:28.436 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.438+0000 7f6f356c7640 1 -- 192.168.123.100:0/580002290 wait complete. 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f356c7640 1 Processor -- start 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f356c7640 1 -- start start 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f356c7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f356c7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6f30174510 con 0x7f6f30058330 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f2effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f2effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:34980/0 (socket says 192.168.123.100:34980) 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f2effd640 1 -- 192.168.123.100:0/1451001371 learned_addr learned my addr 192.168.123.100:0/1451001371 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:28.437 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.439+0000 7f6f2effd640 1 -- 192.168.123.100:0/1451001371 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6f3012c960 con 0x7f6f30058330 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f2effd640 1 --2- 192.168.123.100:0/1451001371 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f6f1c009130 tx=0x7f6f1c0047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6f1c03c040 con 0x7f6f30058330 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f6f1c004030 con 0x7f6f30058330 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6f3012bb20 con 0x7f6f30058330 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 --> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6f30141520 con 0x7f6f30058330 2026-03-20T11:45:28.438 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.440+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6f1c004310 con 0x7f6f30058330 2026-03-20T11:45:28.439 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.441+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 1) ==== 939+0+0 (secure 0 0 0) 0x7f6f1c050020 con 0x7f6f30058330 2026-03-20T11:45:28.439 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.441+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(5..5 src has 1..5) ==== 1065+0+0 (secure 0 0 0) 0x7f6f1c03f370 con 0x7f6f30058330 2026-03-20T11:45:28.439 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.441+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6f3012c430 con 0x7f6f30058330 2026-03-20T11:45:28.443 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.443+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+77519 (secure 0 0 0) 0x7f6f3012c430 con 0x7f6f30058330 2026-03-20T11:45:28.480 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.482+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f6f30058700 con 0x7f6f30058330 2026-03-20T11:45:28.480 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.482+0000 7f6f137fe640 1 -- 192.168.123.100:0/1451001371 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v5) ==== 74+0+3425 (secure 0 0 0) 0x7f6f1c044020 con 0x7f6f30058330 2026-03-20T11:45:28.480 INFO:teuthology.misc.health.vm00.stdout: 2026-03-20T11:45:28.480 INFO:teuthology.misc.health.vm00.stdout:{"epoch":5,"fsid":"d2998f34-0acb-4cf3-b295-d778019a8c29","created":"2026-03-20T11:45:27.023905+0000","modified":"2026-03-20T11:45:27.848362+0000","last_up_change":"0.000000","last_in_change":"2026-03-20T11:45:27.848362+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":2,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":0,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[],"osds":[{"osd":0,"uuid":"232f165d-e880-471c-ad41-9cbb77b50aed","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 
0)/0","state":["exists","new"]},{"osd":1,"uuid":"59a8c5e0-6c84-431b-ac69-a2f3326598f8","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]},{"osd":2,"uuid":"3e2deeca-bacd-4ce3-abce-84b4e72b511b","up":0,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":0,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[]},"cluster_addrs":{"addrvec":[]},"heartbeat_back_addrs":{"addrvec":[]},"heartbeat_front_addrs":{"addrvec":[]},"public_addr":"(unrecognized address family 0)/0","cluster_addr":"(unrecognized address family 0)/0","heartbeat_back_addr":"(unrecognized address family 0)/0","heartbeat_front_addr":"(unrecognized address family 0)/0","state":["exists","new"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":0,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T11:45:28.481 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.483+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 msgr2=0x7f6f30141ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:28.481 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.483+0000 7f6f356c7640 1 --2- 192.168.123.100:0/1451001371 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f6f1c009130 tx=0x7f6f1c0047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.481 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.483+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 shutdown_connections 2026-03-20T11:45:28.481 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.483+0000 7f6f356c7640 1 --2- 192.168.123.100:0/1451001371 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6f30058330 0x7f6f30141ab0 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:28.481 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.483+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 >> 192.168.123.100:0/1451001371 conn(0x7f6f300828c0 msgr2=0x7f6f3005c390 unknown :-1 s=STATE_NONE l=0).mark_down 
2026-03-20T11:45:28.483 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.485+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 shutdown_connections 2026-03-20T11:45:28.483 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:28.485+0000 7f6f356c7640 1 -- 192.168.123.100:0/1451001371 wait complete. 2026-03-20T11:45:28.493 DEBUG:teuthology.misc:0 of 3 OSDs are up 2026-03-20T11:45:30.035 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T11:45:30.037+0000 7f616f03c640 -1 osd.2 0 waiting for initial osdmap 2026-03-20T11:45:30.035 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T11:45:30.037+0000 7fdeb9126640 -1 osd.1 0 waiting for initial osdmap 2026-03-20T11:45:30.035 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T11:45:30.037+0000 7efd08169640 -1 osd.0 0 waiting for initial osdmap 2026-03-20T11:45:30.047 INFO:tasks.ceph.osd.2.vm00.stderr:2026-03-20T11:45:30.049+0000 7f6169e41640 -1 osd.2 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T11:45:30.047 INFO:tasks.ceph.osd.1.vm00.stderr:2026-03-20T11:45:30.049+0000 7fdeb3f2b640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T11:45:30.048 INFO:tasks.ceph.osd.0.vm00.stderr:2026-03-20T11:45:30.050+0000 7efd02f6e640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory 2026-03-20T11:45:30.639 INFO:tasks.ceph.mgr.0.vm00.stderr:2026-03-20T11:45:30.640+0000 7f5a9c028640 -1 mgr.server handle_report got status from non-daemon mon.a 2026-03-20T11:45:34.795 DEBUG:teuthology.orchestra.run.vm00:> adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph osd dump --format=json 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.858+0000 7f20486f0640 1 Processor -- start 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.858+0000 7f20486f0640 1 -- start start 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.858+0000 7f20486f0640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f20400585d0 0x7f20400589a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f20486f0640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f204005a8b0 con 0x7f20400580c0 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f20486f0640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f204005a070 con 0x7f20400585d0 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2046465640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f20400585d0 0x7f20400589a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2045c64640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f20400580c0 0x7f204012f090 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:38458/0 (socket says 192.168.123.100:38458) 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2045c64640 1 -- 192.168.123.100:0/1901565694 learned_addr learned my addr 
192.168.123.100:0/1901565694 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2045463640 1 -- 192.168.123.100:0/1901565694 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3011397858 0 0) 0x7f204005a8b0 con 0x7f20400580c0 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2045463640 1 -- 192.168.123.100:0/1901565694 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2028003610 con 0x7f20400580c0 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2046465640 1 -- 192.168.123.100:0/1901565694 >> v1:192.168.123.100:6789/0 conn(0x7f20400580c0 legacy=0x7f204012f090 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:34.857 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.859+0000 7f2046465640 1 -- 192.168.123.100:0/1901565694 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2040058ee0 con 0x7f20400585d0 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f2046465640 1 --2- 192.168.123.100:0/1901565694 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400585d0 0x7f20400589a0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f2030004770 tx=0x7f203002eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=351dd10b9c98099b server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f2045463640 1 -- 192.168.123.100:0/1901565694 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f203003c070 con 0x7f20400585d0 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f2045463640 1 -- 192.168.123.100:0/1901565694 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f203002f9e0 con 0x7f20400585d0 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f2045463640 1 -- 192.168.123.100:0/1901565694 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f203002fce0 con 0x7f20400585d0 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f20486f0640 1 -- 192.168.123.100:0/1901565694 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400585d0 msgr2=0x7f20400589a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:34.858 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.860+0000 7f20486f0640 1 --2- 192.168.123.100:0/1901565694 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400585d0 0x7f20400589a0 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f2030004770 tx=0x7f203002eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 -- 192.168.123.100:0/1901565694 shutdown_connections 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 --2- 192.168.123.100:0/1901565694 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400585d0 0x7f20400589a0 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 -- 
192.168.123.100:0/1901565694 >> 192.168.123.100:0/1901565694 conn(0x7f2040087020 msgr2=0x7f2040087420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 -- 192.168.123.100:0/1901565694 shutdown_connections 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 -- 192.168.123.100:0/1901565694 wait complete. 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 Processor -- start 2026-03-20T11:45:34.859 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.861+0000 7f20486f0640 1 -- start start 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f20486f0640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f20486f0640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f204012fb30 con 0x7f20400580c0 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2046465640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2046465640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33526/0 (socket says 192.168.123.100:33526) 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2046465640 1 -- 192.168.123.100:0/716913244 learned_addr learned my addr 192.168.123.100:0/716913244 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2046465640 1 -- 192.168.123.100:0/716913244 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f204007f740 con 0x7f20400580c0 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2046465640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f203002ede0 tx=0x7f20300047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:34.860 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2030046070 con 0x7f20400580c0 2026-03-20T11:45:34.861 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.862+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f2030037bc0 con 0x7f20400580c0 2026-03-20T11:45:34.861 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.863+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== 
mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f203003c040 con 0x7f20400580c0 2026-03-20T11:45:34.861 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.863+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f204007fd00 con 0x7f20400580c0 2026-03-20T11:45:34.861 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.863+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2040143110 con 0x7f20400580c0 2026-03-20T11:45:34.861 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.863+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f204007e850 con 0x7f20400580c0 2026-03-20T11:45:34.864 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.866+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f2030052020 con 0x7f20400580c0 2026-03-20T11:45:34.864 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.866+0000 7f2026ffd640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 0x7f201005e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:34.864 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.866+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(11..11 src has 1..11) ==== 2575+0+0 (secure 0 0 0) 0x7f2030076ec0 con 0x7f20400580c0 2026-03-20T11:45:34.865 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.867+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f2030004020 con 0x7f20400580c0 2026-03-20T11:45:34.865 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.867+0000 7f2045c64640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 0x7f201005e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:34.865 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.867+0000 7f2045c64640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 0x7f201005e100 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f20400584b0 tx=0x7f2034002990 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:34.981 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.983+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f2040038120 con 0x7f20400580c0 2026-03-20T11:45:34.981 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.983+0000 7f2026ffd640 1 -- 192.168.123.100:0/716913244 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v11) ==== 74+0+6864 (secure 0 0 0) 0x7f203004f680 con 0x7f20400580c0 
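[annotation] The ack above carries a noticeably larger payload than the first dump (74+0+6864 vs 74+0+3425): by epoch 11 the .mgr pool exists and all three OSDs are up, each publishing v1/v2 address vectors for their public, cluster and heartbeat endpoints, as the JSON printed just below shows. Pulling, say, the msgr2 public address per OSD out of such a dump only needs a walk over `public_addrs.addrvec`; an illustrative sketch, assuming the JSON layout seen here:

    # sketch only: map osd id -> msgr2 public address from an osd dump blob
    import json
    from typing import Dict

    def public_v2_addrs(dump):
        # type: (dict) -> Dict[int, str]
        out = {}
        for osd in dump["osds"]:
            v2 = next((a["addr"] for a in osd["public_addrs"]["addrvec"]
                       if a["type"] == "v2"), None)
            if v2 is not None:
                out[osd["osd"]] = v2
        return out

Applied to the epoch-11 dump below, this yields {0: "192.168.123.100:6808", 1: "192.168.123.100:6800", 2: "192.168.123.100:6816"}.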
2026-03-20T11:45:34.981 INFO:teuthology.misc.health.vm00.stdout: 2026-03-20T11:45:34.981 INFO:teuthology.misc.health.vm00.stdout:{"epoch":11,"fsid":"d2998f34-0acb-4cf3-b295-d778019a8c29","created":"2026-03-20T11:45:27.023905+0000","modified":"2026-03-20T11:45:34.053936+0000","last_up_change":"2026-03-20T11:45:31.038830+0000","last_in_change":"2026-03-20T11:45:27.848362+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T11:45:31.684065+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"232f165d-e880-471c-ad41-9cbb77b50aed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6809","nonce":1162726296}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6811","nonce":1162726296}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6815","nonce":1162726296}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6813","nonce":1162726296}]},"public_addr":"192.168.123.100:6809/1162726296","cluster_addr":"192.168.123.100:6811/1162726296","heartbeat_back_addr":"192.168.123.100:6815/1162726296","heartbeat_front_addr":"192.168.123.100:6813/1162726296","state":["exists","up"]},{"osd":1,"uuid":"59a8c5e0-6c84-431b-ac69-a2f3326598f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":9,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6801","nonce":3952598619}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6803","nonce":3952598619}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6807","nonce":3952598619}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6805","nonce":3952598619}]},"public_addr":"192.168.123.100:6801/3952598619","cluster_addr":"192.168.123.100:6803/3952598619","heartbeat_back_addr":"192.168.123.100:6807/3952598619","heartbeat_front_addr":"192.168.123.100:6805/3952598619","state":["exists","up"]},{"osd":2,"uuid":"3e2deeca-bacd-4ce3-abce-84b4e72b511b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6817","nonce":2144187382}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6819","nonce":2144187382}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6823","nonce":2144187382}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6821","nonce":2144187382}]},"public_addr":"192.168.123.100:6817/2144187382","cluster_addr":"192.168.123.100:6819/2144187382","heartbeat_back_addr":"192.168.123.100:6823/2144187382","heartbeat_front_addr":"192.168.123.100:6821/2144187382","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":45
44132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 msgr2=0x7f201005e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 0x7f201005e100 secure :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0x7f20400584b0 tx=0x7f2034002990 comp rx=0 tx=0).stop 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 msgr2=0x7f20401429f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x7f203002ede0 tx=0x7f20300047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 shutdown_connections 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f201003dc50 0x7f201005e100 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:34.984 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 --2- 192.168.123.100:0/716913244 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f20400580c0 0x7f20401429f0 unknown :-1 s=CLOSED pgs=46 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:34.985 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 >> 192.168.123.100:0/716913244 conn(0x7f2040087020 msgr2=0x7f204012bf10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:34.985 INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 shutdown_connections 2026-03-20T11:45:34.985 
INFO:teuthology.misc.health.vm00.stderr:2026-03-20T11:45:34.986+0000 7f20486f0640 1 -- 192.168.123.100:0/716913244 wait complete. 2026-03-20T11:45:34.991 DEBUG:teuthology.misc:3 of 3 OSDs are up 2026-03-20T11:45:34.992 INFO:tasks.ceph:Creating RBD pool 2026-03-20T11:45:34.992 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph --cluster ceph osd pool create rbd 8 2026-03-20T11:45:35.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f72010c0640 1 Processor -- start 2026-03-20T11:45:35.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f72010c0640 1 -- start start 2026-03-20T11:45:35.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f72010c0640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f71fc153ad0 0x7f71fc173eb0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:35.061 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f72010c0640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f71fc05a9c0 con 0x7f71fc058330 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f72010c0640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f71fc05a0f0 con 0x7f71fc153ad0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f71fa575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f71fc153ad0 0x7f71fc173eb0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f71fa575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f71fc153ad0 0x7f71fc173eb0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33538/0 (socket says 192.168.123.100:33538) 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.063+0000 7f71fa575640 1 -- 192.168.123.100:0/4102796583 learned_addr learned my addr 192.168.123.100:0/4102796583 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71fa575640 1 -- 192.168.123.100:0/4102796583 >> v1:192.168.123.100:6789/0 conn(0x7f71fc058330 legacy=0x7f71fc058700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71fa575640 1 -- 192.168.123.100:0/4102796583 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f71fc05a6b0 con 0x7f71fc153ad0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71fa575640 1 --2- 192.168.123.100:0/4102796583 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc153ad0 0x7f71fc173eb0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f71e4009080 tx=0x7f71e402ef00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b0cf463dacde88d4 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71f9d74640 1 -- 192.168.123.100:0/4102796583 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f71e403c070 con 0x7f71fc153ad0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71f9d74640 1 -- 
192.168.123.100:0/4102796583 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f71e402fb40 con 0x7f71fc153ad0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f71f9d74640 1 -- 192.168.123.100:0/4102796583 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f71e402fe40 con 0x7f71fc153ad0 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f72010c0640 1 -- 192.168.123.100:0/4102796583 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc153ad0 msgr2=0x7f71fc173eb0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f72010c0640 1 --2- 192.168.123.100:0/4102796583 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc153ad0 0x7f71fc173eb0 secure :-1 s=READY pgs=48 cs=0 l=1 rev1=1 crypto rx=0x7f71e4009080 tx=0x7f71e402ef00 comp rx=0 tx=0).stop 2026-03-20T11:45:35.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.064+0000 7f72010c0640 1 -- 192.168.123.100:0/4102796583 shutdown_connections 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 --2- 192.168.123.100:0/4102796583 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc153ad0 0x7f71fc173eb0 unknown :-1 s=CLOSED pgs=48 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 -- 192.168.123.100:0/4102796583 >> 192.168.123.100:0/4102796583 conn(0x7f71fc082930 msgr2=0x7f71fc082d30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 -- 192.168.123.100:0/4102796583 shutdown_connections 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 -- 192.168.123.100:0/4102796583 wait complete. 
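The health check that ends above is driven by the `osd dump` JSON printed a few entries earlier: teuthology issues mon_command({"prefix": "osd dump", "format": "json"}) and counts how many entries in the "osds" array report up before logging "3 of 3 OSDs are up" and letting the ceph task continue to pool creation. A minimal sketch of that counting step, not teuthology's actual implementation, assuming the JSON blob above has been saved to a local file named osd_dump.json:

    import json

    # Load the `osd dump` JSON captured in the log above.
    with open("osd_dump.json") as f:
        dump = json.load(f)

    # Each entry in "osds" carries integer "up" and "in" flags (1 = true).
    osds = dump["osds"]
    up = [o["osd"] for o in osds if o["up"] == 1]

    print(f"{len(up)} of {len(osds)} OSDs are up")  # prints "3 of 3 OSDs are up" for the dump above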
2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 Processor -- start 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 -- start start 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f72010c0640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f71fc174570 con 0x7f71fc058330 2026-03-20T11:45:35.063 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.065+0000 7f71fad76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71fad76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33552/0 (socket says 192.168.123.100:33552) 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71fad76640 1 -- 192.168.123.100:0/2338190248 learned_addr learned my addr 192.168.123.100:0/2338190248 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71fad76640 1 -- 192.168.123.100:0/2338190248 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f71fc158ee0 con 0x7f71fc058330 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71fad76640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f71f000c9f0 tx=0x7f71f000cec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f71f0003b90 con 0x7f71fc058330 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f71f0003d30 con 0x7f71fc058330 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f71fc158bb0 con 0x7f71fc058330 2026-03-20T11:45:35.064 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f71fc1597a0 con 0x7f71fc058330 2026-03-20T11:45:35.064 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.066+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f71f000a950 con 0x7f71fc058330 2026-03-20T11:45:35.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.067+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f71fc058700 con 0x7f71fc058330 2026-03-20T11:45:35.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.067+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f71f000aaf0 con 0x7f71fc058330 2026-03-20T11:45:35.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.067+0000 7f71db7fe640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 0x7f71c805e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:35.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.067+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(11..11 src has 1..11) ==== 2575+0+0 (secure 0 0 0) 0x7f71f0051e00 con 0x7f71fc058330 2026-03-20T11:45:35.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.068+0000 7f71fa575640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 0x7f71c805e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:35.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.068+0000 7f71fa575640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 0x7f71c805e100 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f71e4004770 tx=0x7f71e4033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:35.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.069+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f71f0015b90 con 0x7f71fc058330 2026-03-20T11:45:35.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:35.185+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "rbd", "pg_num": 8} v 0) -- 0x7f71fc158660 con 0x7f71fc058330 2026-03-20T11:45:36.060 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.062+0000 7f71db7fe640 1 -- 192.168.123.100:0/2338190248 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "rbd", "pg_num": 8}]=0 pool 'rbd' created v12) ==== 109+0+0 (secure 0 0 0) 0x7f71f000ae10 con 0x7f71fc058330 2026-03-20T11:45:36.061 INFO:teuthology.orchestra.run.vm00.stderr:pool 'rbd' created 2026-03-20T11:45:36.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 msgr2=0x7f71c805e100 
secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:36.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 0x7f71c805e100 secure :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0x7f71e4004770 tx=0x7f71e4033000 comp rx=0 tx=0).stop 2026-03-20T11:45:36.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 msgr2=0x7f71fc173ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:36.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 secure :-1 s=READY pgs=49 cs=0 l=1 rev1=1 crypto rx=0x7f71f000c9f0 tx=0x7f71f000cec0 comp rx=0 tx=0).stop 2026-03-20T11:45:36.066 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 shutdown_connections 2026-03-20T11:45:36.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f71c803dc50 0x7f71c805e100 unknown :-1 s=CLOSED pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:36.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 --2- 192.168.123.100:0/2338190248 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f71fc058330 0x7f71fc173ab0 unknown :-1 s=CLOSED pgs=49 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:36.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 >> 192.168.123.100:0/2338190248 conn(0x7f71fc082930 msgr2=0x7f71fc05c3d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:36.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 shutdown_connections 2026-03-20T11:45:36.067 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.068+0000 7f72010c0640 1 -- 192.168.123.100:0/2338190248 wait complete. 
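The "Creating RBD pool" step that just completed boils down to the monitor command visible in the log: mon_command({"prefix": "osd pool create", "pool": "rbd", "pg_num": 8}), acknowledged with "pool 'rbd' created". The same JSON payload can be submitted directly through librados instead of the ceph CLI; a rough sketch using the python3-rados bindings, where the conffile path and the implicit client.admin identity are assumptions and not part of this job's configuration:

    import json
    import rados

    # Connect using the default ceph.conf and admin keyring (assumptions).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # Same JSON payload the CLI sent to mon.0 in the log above.
    cmd = json.dumps({"prefix": "osd pool create", "pool": "rbd", "pg_num": 8})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)  # expect 0 and a "pool 'rbd' created" style status string

    cluster.shutdown()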
2026-03-20T11:45:36.075 DEBUG:teuthology.orchestra.run.vm00:> rbd --cluster ceph pool init rbd 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbd1c6300 1 Processor -- start 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbd1c6300 1 -- start start 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbd1c6300 1 --2- >> v2:192.168.123.100:3300/0 conn(0x55f5402c3040 0x55f5402c2820 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbd1c6300 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x55f5400636c0 con 0x55f5402c3410 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbd1c6300 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x55f54008c3b0 con 0x55f5402c3040 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbbc7b640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x55f5402c3040 0x55f5402c2820 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbbc7b640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x55f5402c3040 0x55f5402c2820 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33568/0 (socket says 192.168.123.100:33568) 2026-03-20T11:45:36.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/573975739 learned_addr learned my addr 192.168.123.100:0/573975739 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:36.109 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/573975739 >> v1:192.168.123.100:6789/0 conn(0x55f5402c3410 legacy=0x55f5402c5da0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:36.109 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.110+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/573975739 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x55f5400738a0 con 0x55f5402c3040 2026-03-20T11:45:36.109 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.111+0000 7f0fbbc7b640 1 --2- 192.168.123.100:0/573975739 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402c2820 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f0fb0005820 tx=0x7f0fb0058bd0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=4700c5406b2a5593 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbac79640 1 -- 192.168.123.100:0/573975739 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fb0067580 con 0x55f5402c3040 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbac79640 1 -- 192.168.123.100:0/573975739 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0fb006c070 con 0x55f5402c3040 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbac79640 1 -- 
192.168.123.100:0/573975739 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fb0067890 con 0x55f5402c3040 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/573975739 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 msgr2=0x55f5402c2820 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/573975739 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402c2820 secure :-1 s=READY pgs=51 cs=0 l=1 rev1=1 crypto rx=0x7f0fb0005820 tx=0x7f0fb0058bd0 comp rx=0 tx=0).stop 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/573975739 shutdown_connections 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/573975739 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402c2820 unknown :-1 s=CLOSED pgs=51 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/573975739 >> 192.168.123.100:0/573975739 conn(0x55f540218720 msgr2=0x55f540218b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/573975739 shutdown_connections 2026-03-20T11:45:36.110 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.112+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/573975739 wait complete. 
2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbd1c6300 1 Processor -- start 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbd1c6300 1 -- start start 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbd1c6300 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbd1c6300 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x55f5400e9070 con 0x55f5402c3040 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbbc7b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbbc7b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33578/0 (socket says 192.168.123.100:33578) 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 learned_addr learned my addr 192.168.123.100:0/1429355170 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x55f5402b8980 con 0x55f5402c3040 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.113+0000 7f0fbbc7b640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f0fb0069040 tx=0x7f0fb0002fa0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fb007b560 con 0x55f5402c3040 2026-03-20T11:45:36.111 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0fb006c040 con 0x55f5402c3040 2026-03-20T11:45:36.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x55f5402b9200 con 0x55f5402c3040 2026-03-20T11:45:36.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x55f5402b7370 con 0x55f5402c3040 2026-03-20T11:45:36.112 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0fb007b830 con 0x55f5402c3040 2026-03-20T11:45:36.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f0fb007b9d0 con 0x55f5402c3040 2026-03-20T11:45:36.112 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.114+0000 7f0fa3fff640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 0x7f0f9c05c8b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:36.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.115+0000 7f0fbb47a640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 0x7f0f9c05c8b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:36.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.115+0000 7f0fbb47a640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 0x7f0f9c05c8b0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0fa4001000 tx=0x7f0fa4001440 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:36.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.115+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(12..12 src has 1..12) ==== 2920+0+0 (secure 0 0 0) 0x7f0fb00a9650 con 0x55f5402c3040 2026-03-20T11:45:36.113 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:36.115+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable","pool": "rbd","app": "rbd"} v 0) -- 0x55f5402d4f20 con 0x55f5402c3040 2026-03-20T11:45:37.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.063+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "osd pool application enable","pool": "rbd","app": "rbd"}]=0 enabled application 'rbd' on pool 'rbd' v13) ==== 141+0+0 (secure 0 0 0) 0x7f0fb0021810 con 0x55f5402c3040 2026-03-20T11:45:37.062 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.064+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x55f5400fca80 con 0x55f5402c3040 2026-03-20T11:45:37.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.067+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_get_version_reply(handle=1 version=13) ==== 24+0+0 (secure 0 0 0) 0x7f0fb0071400 con 0x55f5402c3040 2026-03-20T11:45:37.065 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.067+0000 7f0fb9c77640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=13}) -- 0x7f0f94002ae0 con 0x55f5402c3040 2026-03-20T11:45:37.069 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.071+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 8 ==== osd_map(13..13 src has 1..13) ==== 653+0+0 (secure 0 0 0) 0x7f0fb00a87d0 con 0x55f5402c3040 2026-03-20T11:45:37.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.071+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 0x55f540315fc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:37.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.071+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 2.3 2:c4c92e5a:::rbd_trash:head [create] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e13) -- 0x55f540316500 con 0x55f5402c3410 2026-03-20T11:45:37.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.071+0000 7f0fbcf04640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 0x55f540315fc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:37.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.072+0000 7f0fbcf04640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 0x55f540315fc0 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:37.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.073+0000 7f0fbcf04640 1 -- 192.168.123.100:0/1429355170 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 rbd_trash [create] v13'1 uv1 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f0fac002040 con 0x55f5402c3410 2026-03-20T11:45:37.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.074+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 0x55f540371cf0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:37.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.074+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:2 2.2 2:5cea7035:::rbd_info:head [call rbd.metadata_list in=33b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e13) -- 0x55f540372250 con 0x55f540351870 2026-03-20T11:45:37.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.074+0000 7f0fbbc7b640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 0x55f540371cf0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:37.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.075+0000 7f0fbbc7b640 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 0x55f540371cf0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 
client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:37.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.075+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== osd_op_reply(2 rbd_info [call] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 152+0+0 (crc 0 0 0) 0x7f0fb00a83b0 con 0x55f540351870 2026-03-20T11:45:37.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.075+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:3 2.2 2:5cea7035:::rbd_info:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e13) -- 0x55f540350350 con 0x55f540351870 2026-03-20T11:45:37.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.075+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== osd_op_reply(3 rbd_info [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 152+0+0 (crc 0 0 0) 0x7f0fb0070b60 con 0x55f540351870 2026-03-20T11:45:37.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:37.075+0000 7f0fba478640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- pool_op(create unmanaged snap pool 2 tid 4 name v0) -- 0x7f0f98001f40 con 0x55f5402c3040 2026-03-20T11:45:38.069 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.071+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 9 ==== pool_op_reply(tid 4 (0) Success v14) ==== 55+0+0 (secure 0 0 0) 0x7f0fb0023ab0 con 0x55f5402c3040 2026-03-20T11:45:38.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.071+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=14}) -- 0x7f0f9c05f900 con 0x55f5402c3040 2026-03-20T11:45:38.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.071+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 10 ==== osd_map(14..14 src has 1..14) ==== 629+0+0 (secure 0 0 0) 0x7f0fb006f720 con 0x55f5402c3040 2026-03-20T11:45:38.070 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.071+0000 7f0fb9c77640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:5 2.2 2:5cea7035:::rbd_info:head [create,write 0~8 in=8b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e14) -- 0x7f0f94005810 con 0x55f540351870 2026-03-20T11:45:38.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.074+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 <== osd.0 v2:192.168.123.100:6808/1162726296 3 ==== osd_op_reply(5 rbd_info [create,write 0~8] v14'1 uv1 ondisk = 0) ==== 194+0+0 (crc 0 0 0) 0x7f0fb0070b60 con 0x55f540351870 2026-03-20T11:45:38.072 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:38.074+0000 7f0fba478640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- pool_op(delete unmanaged snap pool 2 tid 6 name v14) -- 0x7f0f98003390 con 0x55f5402c3040 2026-03-20T11:45:39.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.073+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 11 ==== pool_op_reply(tid 6 (0) Success v15) ==== 43+0+0 (secure 0 0 0) 0x7f0fb00663b0 con 0x55f5402c3040 
2026-03-20T11:45:39.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.073+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=15}) -- 0x7f0f9c05f9b0 con 0x55f5402c3040 2026-03-20T11:45:39.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.073+0000 7f0fa3fff640 1 -- 192.168.123.100:0/1429355170 <== mon.0 v2:192.168.123.100:3300/0 12 ==== osd_map(15..15 src has 1..15) ==== 657+0+0 (secure 0 0 0) 0x7f0fb00ae020 con 0x55f5402c3040 2026-03-20T11:45:39.071 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.073+0000 7f0fb9c77640 1 -- 192.168.123.100:0/1429355170 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:7 2.2 2:5cea7035:::rbd_info:head [write 0~19 in=19b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e15) -- 0x7f0f94005bb0 con 0x55f540351870 2026-03-20T11:45:39.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.075+0000 7f0fbbc7b640 1 -- 192.168.123.100:0/1429355170 <== osd.0 v2:192.168.123.100:6808/1162726296 4 ==== osd_op_reply(7 rbd_info [write 0~19] v15'2 uv2 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f0fb0070b60 con 0x55f540351870 2026-03-20T11:45:39.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.075+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 msgr2=0x55f540371cf0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.075+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 0x55f540371cf0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.075+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 msgr2=0x55f540315fc0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.073 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.075+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 0x55f540315fc0 crc :-1 s=READY pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 msgr2=0x7f0f9c05c8b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 0x7f0f9c05c8b0 secure :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0x7f0fa4001000 tx=0x7f0fa4001440 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 msgr2=0x55f5402b6bf0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.074 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 secure :-1 s=READY pgs=52 cs=0 l=1 rev1=1 crypto rx=0x7f0fb0069040 tx=0x7f0fb0002fa0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 shutdown_connections 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x55f5402c3410 0x55f540315fc0 unknown :-1 s=CLOSED pgs=7 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x55f540351870 0x55f540371cf0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f9c03c400 0x7f0f9c05c8b0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 --2- 192.168.123.100:0/1429355170 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55f5402c3040 0x55f5402b6bf0 unknown :-1 s=CLOSED pgs=52 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 >> 192.168.123.100:0/1429355170 conn(0x55f540218720 msgr2=0x55f5402c0700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 shutdown_connections 2026-03-20T11:45:39.074 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.076+0000 7f0fbd1c6300 1 -- 192.168.123.100:0/1429355170 wait complete. 2026-03-20T11:45:39.078 INFO:tasks.ceph:Starting mds daemons in cluster ceph... 
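Before the mds startup announced above, the `rbd --cluster ceph pool init rbd` invocation did two things visible in the log: it enabled the 'rbd' application on the new pool (the "osd pool application enable" mon_command acknowledged earlier) and it seeded pool metadata, with osd_op entries showing the rbd_trash object being created and the rbd_info object being created and written. One illustrative way to confirm that seeding from the client node is to list the pool's objects with the rados CLI; only the pool name 'rbd' comes from the log, the rest of this snippet is an assumption:

    import subprocess

    # List every object in the freshly initialized pool.
    out = subprocess.run(
        ["rados", "-p", "rbd", "ls"],
        check=True, capture_output=True, text=True,
    ).stdout.split()

    # These two objects are written during `rbd pool init` in the log above.
    for obj in ("rbd_info", "rbd_trash"):
        print(obj, "present" if obj in out else "missing")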
2026-03-20T11:45:39.078 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph config log 1 --format=json 2026-03-20T11:45:39.078 INFO:tasks.daemonwatchdog.daemon_watchdog:watchdog starting 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.189+0000 7f10b33fa640 1 Processor -- start 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.189+0000 7f10b33fa640 1 -- start start 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b33fa640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f10ac151da0 0x7f10ac172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b33fa640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f10ac05bff0 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b33fa640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f10ac05b720 con 0x7f10ac151da0 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b116f640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f10ac057710 0x7f10ac057ae0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:38492/0 (socket says 192.168.123.100:38492) 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b116f640 1 -- 192.168.123.100:0/961839752 learned_addr learned my addr 192.168.123.100:0/961839752 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b096e640 1 --2- 192.168.123.100:0/961839752 >> v2:192.168.123.100:3300/0 conn(0x7f10ac151da0 0x7f10ac172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 588627010 0 0) 0x7f10ac05bff0 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f1094003610 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v1:192.168.123.100:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 2306439537 0 0) 0x7f1094003610 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 >> v2:192.168.123.100:3300/0 conn(0x7f10ac151da0 msgr2=0x7f10ac172180 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 --2- 192.168.123.100:0/961839752 >> v2:192.168.123.100:3300/0 conn(0x7f10ac151da0 0x7f10ac172180 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).stop 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 --> v1:192.168.123.100:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f10ac05bce0 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v1:192.168.123.100:6789/0 3 ==== mon_map magic: 0 ==== 205+0+0 (unknown 2760865362 0 0) 0x7f109c002d80 con 0x7f10ac057710 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 >> v1:192.168.123.100:6789/0 conn(0x7f10ac057710 legacy=0x7f10ac057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 --2- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f1094003f30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f10ac05b720 con 0x7f1094003b40 2026-03-20T11:45:39.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.190+0000 7f10b116f640 1 --2- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f1094003f30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.189 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.191+0000 7f10b116f640 1 -- 192.168.123.100:0/961839752 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f10ac05bce0 con 0x7f1094003b40 2026-03-20T11:45:39.189 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.191+0000 7f10b116f640 1 --2- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f1094003f30 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f109c00c190 tx=0x7f109c02f630 comp rx=0 tx=0).ready entity=mon.0 client_cookie=12acc537cedcfeff server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.189 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.191+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f109c007cf0 con 0x7f1094003b40 2026-03-20T11:45:39.189 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.191+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f109c005af0 con 0x7f1094003b40 2026-03-20T11:45:39.189 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.191+0000 7f109bfff640 1 -- 192.168.123.100:0/961839752 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f109c005e10 con 0x7f1094003b40 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 -- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 msgr2=0x7f1094003f30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.190 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 --2- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f1094003f30 secure :-1 s=READY pgs=54 cs=0 l=1 rev1=1 crypto rx=0x7f109c00c190 tx=0x7f109c02f630 comp rx=0 tx=0).stop 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 -- 192.168.123.100:0/961839752 shutdown_connections 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 --2- 192.168.123.100:0/961839752 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f1094003f30 unknown :-1 s=CLOSED pgs=54 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 --2- 192.168.123.100:0/961839752 >> v2:192.168.123.100:3300/0 conn(0x7f10ac151da0 0x7f10ac172180 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 -- 192.168.123.100:0/961839752 >> 192.168.123.100:0/961839752 conn(0x7f10ac082bf0 msgr2=0x7f10ac082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 -- 192.168.123.100:0/961839752 shutdown_connections 2026-03-20T11:45:39.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.192+0000 7f10b33fa640 1 -- 192.168.123.100:0/961839752 wait complete. 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b33fa640 1 Processor -- start 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b33fa640 1 -- start start 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b33fa640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b33fa640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f10ac172840 con 0x7f1094003b40 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b116f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b116f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33618/0 (socket says 192.168.123.100:33618) 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b116f640 1 -- 192.168.123.100:0/2574275002 learned_addr learned my addr 192.168.123.100:0/2574275002 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b116f640 1 -- 
192.168.123.100:0/2574275002 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f10ac10cce0 con 0x7f1094003b40 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f10b116f640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f109c002820 tx=0x7f109c003030 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.193+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f109c044070 con 0x7f1094003b40 2026-03-20T11:45:39.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f109c005ca0 con 0x7f1094003b40 2026-03-20T11:45:39.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f109c007070 con 0x7f1094003b40 2026-03-20T11:45:39.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f10ac05a6e0 con 0x7f1094003b40 2026-03-20T11:45:39.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f10ac121840 con 0x7f1094003b40 2026-03-20T11:45:39.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f10ac05f370 con 0x7f1094003b40 2026-03-20T11:45:39.192 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f109c051020 con 0x7f1094003b40 2026-03-20T11:45:39.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.194+0000 7f1099ffb640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 0x7f107c05e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.195+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f109c03c050 con 0x7f1094003b40 2026-03-20T11:45:39.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.197+0000 7f10b096e640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 0x7f107c05e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.197+0000 
7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f10ac10cce0 con 0x7f1094003b40 2026-03-20T11:45:39.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.197+0000 7f10b096e640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 0x7f107c05e100 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f10a0004770 tx=0x7f10a0006f90 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.309 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.310+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "config log", "num": 1, "format": "json"} v 0) -- 0x7f10ac10c5f0 con 0x7f1094003b40 2026-03-20T11:45:39.309 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.311+0000 7f1099ffb640 1 -- 192.168.123.100:0/2574275002 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "config log", "num": 1, "format": "json"}]=0 v1) ==== 86+0+61 (secure 0 0 0) 0x7f109c04fa10 con 0x7f1094003b40 2026-03-20T11:45:39.309 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:39.311 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.313+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 msgr2=0x7f107c05e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.311 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.313+0000 7f10b33fa640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 0x7f107c05e100 secure :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0x7f10a0004770 tx=0x7f10a0006f90 comp rx=0 tx=0).stop 2026-03-20T11:45:39.311 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.313+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 msgr2=0x7f10ac121e60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.311 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.313+0000 7f10b33fa640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 secure :-1 s=READY pgs=55 cs=0 l=1 rev1=1 crypto rx=0x7f109c002820 tx=0x7f109c003030 comp rx=0 tx=0).stop 2026-03-20T11:45:39.312 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.313+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 shutdown_connections 2026-03-20T11:45:39.312 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.314+0000 7f10b33fa640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f107c03dc50 0x7f107c05e100 unknown :-1 s=CLOSED pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.312 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.314+0000 7f10b33fa640 1 --2- 192.168.123.100:0/2574275002 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f1094003b40 0x7f10ac121e60 unknown :-1 s=CLOSED pgs=55 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.312 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.314+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 >> 192.168.123.100:0/2574275002 conn(0x7f10ac082bf0 msgr2=0x7f10ac077c80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.312 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.314+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 shutdown_connections 2026-03-20T11:45:39.312 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.314+0000 7f10b33fa640 1 -- 192.168.123.100:0/2574275002 wait complete. 2026-03-20T11:45:39.320 INFO:teuthology.orchestra.run.vm00.stdout:[{"version":1,"timestamp":"0.000000","name":"","changes":[]}] 2026-03-20T11:45:39.320 INFO:tasks.ceph_manager:config epoch is 1 2026-03-20T11:45:39.320 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 2026-03-20T11:45:39.320 INFO:tasks.ceph.ceph_manager.ceph:waiting for mgr available 2026-03-20T11:45:39.320 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph mgr dump --format=json 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b8e7d7640 1 Processor -- start 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b8e7d7640 1 -- start start 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b8e7d7640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5b8805c810 0x7f5b880574a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b8e7d7640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f5b88059d60 con 0x7f5b8805cbe0 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b8e7d7640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f5b88058900 con 0x7f5b8805c810 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b87fff640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5b8805c810 0x7f5b880574a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b87fff640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5b8805c810 0x7f5b880574a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33634/0 (socket says 192.168.123.100:33634) 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b87fff640 1 -- 192.168.123.100:0/555741422 learned_addr learned my addr 192.168.123.100:0/555741422 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.391 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.393+0000 7f5b87fff640 1 -- 192.168.123.100:0/555741422 >> v1:192.168.123.100:6789/0 conn(0x7f5b8805cbe0 legacy=0x7f5b880579e0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b87fff640 1 -- 192.168.123.100:0/555741422 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5b88059a50 con 0x7f5b8805c810 2026-03-20T11:45:39.392 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b86ffd640 1 -- 192.168.123.100:0/555741422 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1588071601 0 0) 0x7f5b88059d60 con 0x7f5b8805cbe0 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b87fff640 1 --2- 192.168.123.100:0/555741422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b880574a0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f5b74009870 tx=0x7f5b7402ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=cbbbd37901856d72 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b86ffd640 1 -- 192.168.123.100:0/555741422 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b7403c070 con 0x7f5b8805c810 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b86ffd640 1 -- 192.168.123.100:0/555741422 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5b7402faa0 con 0x7f5b8805c810 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b86ffd640 1 -- 192.168.123.100:0/555741422 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b7402fda0 con 0x7f5b8805c810 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/555741422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 msgr2=0x7f5b880574a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/555741422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b880574a0 secure :-1 s=READY pgs=57 cs=0 l=1 rev1=1 crypto rx=0x7f5b74009870 tx=0x7f5b7402ee60 comp rx=0 tx=0).stop 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/555741422 shutdown_connections 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/555741422 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b880574a0 unknown :-1 s=CLOSED pgs=57 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/555741422 >> 192.168.123.100:0/555741422 conn(0x7f5b88082bf0 msgr2=0x7f5b88082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.392 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.394+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/555741422 shutdown_connections 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/555741422 wait complete. 
2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b8e7d7640 1 Processor -- start 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b8e7d7640 1 -- start start 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b8e7d7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b8e7d7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5b88174bd0 con 0x7f5b8805c810 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b87fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b87fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33646/0 (socket says 192.168.123.100:33646) 2026-03-20T11:45:39.393 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.395+0000 7f5b87fff640 1 -- 192.168.123.100:0/1441783382 learned_addr learned my addr 192.168.123.100:0/1441783382 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b87fff640 1 -- 192.168.123.100:0/1441783382 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5b8812d960 con 0x7f5b8805c810 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b87fff640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f5b74002400 tx=0x7f5b740047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b7403c040 con 0x7f5b8805c810 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5b74037d00 con 0x7f5b8805c810 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5b8812cb20 con 0x7f5b8805c810 2026-03-20T11:45:39.394 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5b74004020 con 0x7f5b8805c810 2026-03-20T11:45:39.394 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.396+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5b88142520 con 0x7f5b8805c810 2026-03-20T11:45:39.395 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.397+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5b8812d430 con 0x7f5b8805c810 2026-03-20T11:45:39.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.399+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f5b74050020 con 0x7f5b8805c810 2026-03-20T11:45:39.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.399+0000 7f5b84ff9640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 0x7f5b5805e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.399+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f5b74077410 con 0x7f5b8805c810 2026-03-20T11:45:39.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.399+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f5b8812d430 con 0x7f5b8805c810 2026-03-20T11:45:39.397 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.400+0000 7f5b877fe640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 0x7f5b5805e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.398 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.400+0000 7f5b877fe640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 0x7f5b5805e100 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f5b780023c0 tx=0x7f5b78007a10 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.545 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.546+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "mgr dump", "format": "json"} v 0) -- 0x7f5b8805e930 con 0x7f5b8805c810 2026-03-20T11:45:39.546 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.547+0000 7f5b84ff9640 1 -- 192.168.123.100:0/1441783382 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "mgr dump", "format": "json"}]=0 v5) ==== 74+0+97463 (secure 0 0 0) 0x7f5b74044030 con 0x7f5b8805c810 2026-03-20T11:45:39.546 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 msgr2=0x7f5b5805e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 0x7f5b5805e100 secure :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0x7f5b780023c0 tx=0x7f5b78007a10 comp rx=0 tx=0).stop 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 msgr2=0x7f5b88142ab0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 secure :-1 s=READY pgs=58 cs=0 l=1 rev1=1 crypto rx=0x7f5b74002400 tx=0x7f5b740047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 shutdown_connections 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5b5803dc50 0x7f5b5805e100 unknown :-1 s=CLOSED pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 --2- 192.168.123.100:0/1441783382 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5b8805c810 0x7f5b88142ab0 unknown :-1 s=CLOSED pgs=58 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 >> 192.168.123.100:0/1441783382 conn(0x7f5b88082bf0 msgr2=0x7f5b8805aea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 shutdown_connections 2026-03-20T11:45:39.548 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.550+0000 7f5b8e7d7640 1 -- 192.168.123.100:0/1441783382 wait complete. 
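The mgr dump JSON that follows is what the "waiting for mgr available" step above inspects before the run proceeds. A minimal sketch (not teuthology's actual implementation; the command invocation, retry interval and timeout below are illustrative assumptions) of how such a readiness check can be expressed against the documented top-level fields "available" and "active_name":

    import json
    import subprocess
    import time

    def wait_for_mgr_available(timeout=300, interval=5):
        """Poll `ceph mgr dump --format=json` until an active mgr is available."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            out = subprocess.check_output(
                ["ceph", "mgr", "dump", "--format=json"], text=True)
            dump = json.loads(out)
            # "available" turns true once the active mgr has loaded its modules;
            # "active_name" identifies which daemon holds the active role.
            if dump.get("available"):
                return dump.get("active_name")
            time.sleep(interval)
        raise RuntimeError("timed out waiting for an available mgr")

In the dump below the cluster already reports "available": true with active_name "0", so this sub-step of "Waiting until ceph daemons up and pgs clean" is satisfied immediately.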
2026-03-20T11:45:39.557 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":5,"flags":0,"active_gid":4104,"active_name":"0","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6824","nonce":1022285047},{"type":"v1","addr":"192.168.123.100:6825","nonce":1022285047}]},"active_addr":"192.168.123.100:6825/1022285047","active_change":"2026-03-20T11:45:29.630759+0000","active_mgr_features":4544132024016699391,"available":true,"standbys":[],"modules":["iostat","nfs"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to, use commas to separate multiple","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate 
as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, 
etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send 
metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"certificate_automated_rotation_enabled":{"name":"certificate_automated_rotation_enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"This flag controls whether cephadm automatically rotates certificates upon expiration.","long_desc":"","tags":[],"see_also":[]},"certificate_check_debug_mode":{"name":"certificate_check_debug_mode","type":"bool","level":"dev","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"FOR TESTING ONLY: This flag forces the certificate check instead of waiting for certificate_check_period.","long_desc":"","tags":[],"see_also":[]},"certificate_check_period":{"name":"certificate_check_period","type":"int","level":"advanced","flags":0,"default_value":"1","min":"0","max":"30","enum_allowed":[],"desc":"Specifies how often (in days) the certificate should be checked for validity.","long_desc":"","tags":[],"see_also":[]},"certificate_duration_days":{"name":"certificate_duration_days","type":"int","level":"advanced","flags":0,"default_value":"1095","min":"90","max":"3650","enum_allowed":[],"desc":"Specifies the duration of self certificates generated and signed by cephadm root CA","long_desc":"","tags":[],"see_also":[]},"certificate_renewal_threshold_days":{"name":"certificate_renewal_threshold_days","type":"int","level":"advanced","flags":0,"default_value":"30","min":"10","max":"90","enum_allowed":[],"desc":"Specifies the lead time in days to initiate certificate renewal before expiration.","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman 
only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.28.1","min":"","max":"","enum_allowed":[],"desc":"Alertmanager container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"Elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:12.3.1","min":"","max":"","enum_allowed":[],"desc":"Grafana container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"Haproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_nginx":{"name":"container_image_nginx","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nginx:sclorg-nginx-126","min":"","max":"","enum_allowed":[],"desc":"Nginx container 
image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.9.1","min":"","max":"","enum_allowed":[],"desc":"Node exporter container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.5","min":"","max":"","enum_allowed":[],"desc":"Nvmeof container image","long_desc":"","tags":[],"see_also":[]},"container_image_oauth2_proxy":{"name":"container_image_oauth2_proxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/oauth2-proxy/oauth2-proxy:v7.6.0","min":"","max":"","enum_allowed":[],"desc":"Oauth2 proxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v3.6.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"docker.io/grafana/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba_metrics":{"name":"container_image_samba_metrics","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-metrics:ceph20-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba metrics container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"docker.io/maxwo/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"Snmp gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"stray_daemon_check_interval":{"name":"stray_daemon_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"how frequently cephadm should check for the presence of stray daemons","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MANAGED_BY_CLUSTERS":{"name":"MANAGED_BY_CLUSTERS","type":"str","level":"advanced","flags":0,"default_value":"[]","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"MULTICLUSTER_CONFIG":{"name":"MULTICLUSTER_CONFIG","type":"str","level":"advanced","flags":0,"default_value":"{}","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROM_ALERT_CREDENTIAL_CACHE_TTL":{"name":"PROM_ALERT_CREDENTIAL_CACHE_TTL","type":"int","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min
":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"default_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_HOSTNAME_PER_DAEMON":{"name":"RGW_HOSTNAME_PER_DAEMON","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"UNSAFE_TLS_v1_2":{"name":"UNSAFE_TLS_v1_2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default
_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crypto_caller":{"name":"crypto_caller","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the 
day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sso_oauth2":{"name":"sso_oauth2","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this 
long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not 
found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas 
across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool 
for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"defau
lt_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint"
,"type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how 
long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_als
o":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"prometheus_tls_secret_name":{"name":"prometheus_tls_secret_name","type":"str","level":"advanced","flags":0,"default_value":"rook-ceph-prometheus-server-tls","min":"","max":"","enum_allowed":[],"desc":"name of tls secret in k8s for prometheus","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered 
PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"smb","can_run":true,"error_string":"","module_options":{"internal_store_backend":{"name":"internal_store_backend","type":"str","level":"dev","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"set 
internal store backend. for development and testing only","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_orchestration":{"name":"update_orchestration","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically update orchestration when smb resources are changed","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_cloning":{"name":"pause_cloning","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"pause_purging":{"name":"pause_purging","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Pause asynchronous subvolume purge threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"tentacle":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":0,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1662432391}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":2129654527}]},{"name":"rbd_support","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":1318955847}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.100:0","nonce":195774390}]}]} 2026-03-20T11:45:39.558 INFO:tasks.ceph.ceph_manager.ceph:mgr available! 2026-03-20T11:45:39.558 INFO:tasks.ceph.ceph_manager.ceph:waiting for all up 2026-03-20T11:45:39.558 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a437f640 1 Processor -- start 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a437f640 1 -- start start 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a437f640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f119c057710 0x7f119c057ae0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a437f640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f119c05bff0 con 0x7f119c058020 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a437f640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f119c05b720 con 0x7f119c057710 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a18f3640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f119c058020 0x7f119c07e420 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:38510/0 (socket says 192.168.123.100:38510) 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a18f3640 1 -- 192.168.123.100:0/3016427099 learned_addr learned my addr 192.168.123.100:0/3016427099 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:39.625 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a20f4640 1 --2- >> v2:192.168.123.100:3300/0 
conn(0x7f119c057710 0x7f119c057ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a10f2640 1 -- 192.168.123.100:0/3016427099 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1263302747 0 0) 0x7f119c05bff0 con 0x7f119c058020 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a20f4640 1 -- 192.168.123.100:0/3016427099 >> v1:192.168.123.100:6789/0 conn(0x7f119c058020 legacy=0x7f119c07e420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.627+0000 7f11a20f4640 1 -- 192.168.123.100:0/3016427099 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f119c05bce0 con 0x7f119c057710 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a20f4640 1 --2- 192.168.123.100:0/3016427099 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c057ae0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f118c004770 tx=0x7f118c02eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f553e6fed979107e server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a10f2640 1 -- 192.168.123.100:0/3016427099 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f118c03c070 con 0x7f119c057710 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a10f2640 1 -- 192.168.123.100:0/3016427099 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f118c02f9e0 con 0x7f119c057710 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a10f2640 1 -- 192.168.123.100:0/3016427099 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f118c02fce0 con 0x7f119c057710 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 -- 192.168.123.100:0/3016427099 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 msgr2=0x7f119c057ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 --2- 192.168.123.100:0/3016427099 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c057ae0 secure :-1 s=READY pgs=60 cs=0 l=1 rev1=1 crypto rx=0x7f118c004770 tx=0x7f118c02eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 -- 192.168.123.100:0/3016427099 shutdown_connections 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 --2- 192.168.123.100:0/3016427099 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c057ae0 unknown :-1 s=CLOSED pgs=60 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 -- 192.168.123.100:0/3016427099 >> 192.168.123.100:0/3016427099 conn(0x7f119c082bf0 msgr2=0x7f119c082ff0 unknown :-1 s=STATE_NONE 
l=0).mark_down 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 -- 192.168.123.100:0/3016427099 shutdown_connections 2026-03-20T11:45:39.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.628+0000 7f11a437f640 1 -- 192.168.123.100:0/3016427099 wait complete. 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a437f640 1 Processor -- start 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a437f640 1 -- start start 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a437f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a437f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f119c07eec0 con 0x7f119c057710 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a20f4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a20f4640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:33668/0 (socket says 192.168.123.100:33668) 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a20f4640 1 -- 192.168.123.100:0/3187064288 learned_addr learned my addr 192.168.123.100:0/3187064288 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a20f4640 1 -- 192.168.123.100:0/3187064288 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f119c1b28b0 con 0x7f119c057710 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.629+0000 7f11a20f4640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f118c02f9b0 tx=0x7f118c0047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.627 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f118c044070 con 0x7f119c057710 2026-03-20T11:45:39.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f118c037d80 con 0x7f119c057710 2026-03-20T11:45:39.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 
0x7f118c03c040 con 0x7f119c057710 2026-03-20T11:45:39.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f119c1b1a70 con 0x7f119c057710 2026-03-20T11:45:39.628 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f119c1c7470 con 0x7f119c057710 2026-03-20T11:45:39.629 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f119c1b2380 con 0x7f119c057710 2026-03-20T11:45:39.629 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.630+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f118c051020 con 0x7f119c057710 2026-03-20T11:45:39.629 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.631+0000 7f1182ffd640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 0x7f116c062610 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.629 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.631+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f118c076ea0 con 0x7f119c057710 2026-03-20T11:45:39.631 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.633+0000 7f11a18f3640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 0x7f116c062610 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.631 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.633+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f119c1b2380 con 0x7f119c057710 2026-03-20T11:45:39.631 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.633+0000 7f11a18f3640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 0x7f116c062610 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f119c058480 tx=0x7f11900079e0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.748+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f119c057ae0 con 0x7f119c057710 2026-03-20T11:45:39.746 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.748+0000 7f1182ffd640 1 -- 192.168.123.100:0/3187064288 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v15) ==== 74+0+8849 (secure 0 0 0) 0x7f118c051340 con 0x7f119c057710 2026-03-20T11:45:39.746 INFO:teuthology.orchestra.run.vm00.stdout: 
2026-03-20T11:45:39.746 INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":15,"fsid":"d2998f34-0acb-4cf3-b295-d778019a8c29","created":"2026-03-20T11:45:27.023905+0000","modified":"2026-03-20T11:45:39.071211+0000","last_up_change":"2026-03-20T11:45:31.038830+0000","last_in_change":"2026-03-20T11:45:27.848362+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T11:45:31.684065+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T11:45:35.189742+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"15","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":15,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.8799999952316284,"score_stable":1.8799999952316284,"optimal_score":1,"raw_score_acting":1.8799999952316284,"raw_score_stable":1.8799999952316284,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"232f165d-e880-471c-ad41-9cbb77b50aed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6809","nonce":1162726296}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6811","nonce":1162726296}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6815","nonce":1162726296}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6813","nonce":1162726296}]},"public_addr":"192.168.123.100:6809/1162726296","cluster_addr":"192.168.123.100:6811/1162726296","heartbeat_back_addr":"192.168.123.100:6815/1162726296","heartbeat_front_addr":"192.168.123.100:6813/1162726296","state":["exists","up"]},{"osd":1,"uuid":"59a8c5e0-6c84-431b-ac69-a2f3326598f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6801","nonce":3952598619}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6803","nonce":3952598619}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6807","nonce":3952598619}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6805","nonce":3952598619}]},"public_addr":"192.168.123.100:6801/3952598619","cluster_addr":"192.168.123.100:6803/3952598619","heartbeat_back_addr":"192.168.123.100:6807/3952598619","heartbeat_front_addr":"192.168.123.100:6805/3952598619","state":["exists","up"]},{"osd":2,"uuid":"3e2deeca-bacd-4ce3-abce-84b4e72b511b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6817","nonce":2144187382}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6819","nonce":2144187382}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6823","nonce":2144187382}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6821","nonce":2144187382}]},"public_addr":"192.168.123.100:6817/2144187382","cluster_addr":"192.168.123.100:6819/2144187382","heartbeat_back_addr":"192.168.123.100:6823/2144187382","heartbeat_front_addr":"192.168.123.100:6821/2144187382","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interv
al":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 msgr2=0x7f116c062610 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 0x7f116c062610 secure :-1 s=READY pgs=18 cs=0 l=1 rev1=1 crypto rx=0x7f119c058480 tx=0x7f11900079e0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 msgr2=0x7f119c1c7a00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 secure :-1 s=READY pgs=61 cs=0 l=1 rev1=1 crypto rx=0x7f118c02f9b0 tx=0x7f118c0047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 shutdown_connections 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f116c042160 0x7f116c062610 unknown :-1 s=CLOSED pgs=18 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 --2- 192.168.123.100:0/3187064288 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f119c057710 0x7f119c1c7a00 unknown :-1 s=CLOSED pgs=61 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 >> 192.168.123.100:0/3187064288 conn(0x7f119c082bf0 msgr2=0x7f119c075a80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 
192.168.123.100:0/3187064288 shutdown_connections 2026-03-20T11:45:39.749 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.751+0000 7f11a437f640 1 -- 192.168.123.100:0/3187064288 wait complete. 2026-03-20T11:45:39.758 INFO:tasks.ceph.ceph_manager.ceph:all up! 2026-03-20T11:45:39.758 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd dump --format=json 2026-03-20T11:45:39.821 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.823+0000 7f489acae640 1 Processor -- start 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.823+0000 7f489acae640 1 -- start start 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.823+0000 7f489acae640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f4894057d10 0x7f48940580e0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f489acae640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f48940599c0 con 0x7f4894058620 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f489acae640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f489405cd40 con 0x7f4894057d10 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f4894057d10 0x7f48940580e0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f4894057d10 0x7f48940580e0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36048/0 (socket says 192.168.123.100:36048) 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f488bfff640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f4894058620 0x7f4894173080 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:32980/0 (socket says 192.168.123.100:32980) 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 -- 192.168.123.100:0/371268275 learned_addr learned my addr 192.168.123.100:0/371268275 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 -- 192.168.123.100:0/371268275 >> v1:192.168.123.100:6789/0 conn(0x7f4894058620 legacy=0x7f4894173080 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 -- 192.168.123.100:0/371268275 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f4894059cd0 con 0x7f4894057d10 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.824+0000 7f4898a23640 1 --2- 192.168.123.100:0/371268275 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f48940580e0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f4884009870 tx=0x7f488402ee60 comp 
rx=0 tx=0).ready entity=mon.0 client_cookie=7a6901f71e5dfdda server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.822 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f488b7fe640 1 -- 192.168.123.100:0/371268275 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f488403c070 con 0x7f4894057d10 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f488b7fe640 1 -- 192.168.123.100:0/371268275 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f488402faa0 con 0x7f4894057d10 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f488b7fe640 1 -- 192.168.123.100:0/371268275 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f488402fda0 con 0x7f4894057d10 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 -- 192.168.123.100:0/371268275 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 msgr2=0x7f48940580e0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 --2- 192.168.123.100:0/371268275 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f48940580e0 secure :-1 s=READY pgs=63 cs=0 l=1 rev1=1 crypto rx=0x7f4884009870 tx=0x7f488402ee60 comp rx=0 tx=0).stop 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 -- 192.168.123.100:0/371268275 shutdown_connections 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 --2- 192.168.123.100:0/371268275 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f48940580e0 unknown :-1 s=CLOSED pgs=63 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 -- 192.168.123.100:0/371268275 >> 192.168.123.100:0/371268275 conn(0x7f4894087340 msgr2=0x7f4894087740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 -- 192.168.123.100:0/371268275 shutdown_connections 2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.825+0000 7f489acae640 1 -- 192.168.123.100:0/371268275 wait complete. 
2026-03-20T11:45:39.823 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 Processor -- start 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 -- start start 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f4894174b30 con 0x7f4894057d10 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f4898a23640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f4898a23640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36050/0 (socket says 192.168.123.100:36050) 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f4898a23640 1 -- 192.168.123.100:0/2139142780 learned_addr learned my addr 192.168.123.100:0/2139142780 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f4898a23640 1 -- 192.168.123.100:0/2139142780 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f489410d050 con 0x7f4894057d10 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f4898a23640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f488402eea0 tx=0x7f48840047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f488403c040 con 0x7f4894057d10 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f4884037d00 con 0x7f4894057d10 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f4884004020 con 0x7f4894057d10 2026-03-20T11:45:39.824 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f489410cd20 con 0x7f4894057d10 2026-03-20T11:45:39.824 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.826+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f489410d5a0 con 0x7f4894057d10 2026-03-20T11:45:39.825 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.827+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f4884050020 con 0x7f4894057d10 2026-03-20T11:45:39.825 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.827+0000 7f48897fa640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 0x7f486405e0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:39.825 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.827+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f4884076f00 con 0x7f4894057d10 2026-03-20T11:45:39.825 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.827+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f489410c7d0 con 0x7f4894057d10 2026-03-20T11:45:39.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.829+0000 7f488bfff640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 0x7f486405e0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:39.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.829+0000 7f488bfff640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 0x7f486405e0b0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f4894058a80 tx=0x7f487c007af0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:39.827 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.829+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f4884044030 con 0x7f4894057d10 2026-03-20T11:45:39.940 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.942+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd dump", "format": "json"} v 0) -- 0x7f48940580e0 con 0x7f4894057d10 2026-03-20T11:45:39.940 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.942+0000 7f48897fa640 1 -- 192.168.123.100:0/2139142780 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd dump", "format": "json"}]=0 v15) ==== 74+0+8849 (secure 0 0 0) 0x7f4884068030 con 0x7f4894057d10 2026-03-20T11:45:39.940 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:39.940 
INFO:teuthology.orchestra.run.vm00.stdout:{"epoch":15,"fsid":"d2998f34-0acb-4cf3-b295-d778019a8c29","created":"2026-03-20T11:45:27.023905+0000","modified":"2026-03-20T11:45:39.071211+0000","last_up_change":"2026-03-20T11:45:31.038830+0000","last_in_change":"2026-03-20T11:45:27.848362+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":4,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":2,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"tentacle","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-20T11:45:31.684065+0000","flags":1,"flags_names":"hashpspool","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"11","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":2.9900000095367432,"score_stable":2.9900000095367432,"optimal_score":0.67000001668930054,"raw_score_acting":2,"raw_score_stable":2,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}},{"pool":2,"pool_name":"rbd","create_time":"2026-03-20T11:45:35.189742+0000","flags":8193,"flags_names":"hashpspool,selfmanaged_snaps","type":1,"size":2,"min_size":1,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":8,"pg_placement_num":8,"pg_placement_num_target":8,"pg_num_target":8,"pg_num_pending":8,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"15","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":2,"snap_epoch":15,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"nonprimary_shards":"{}","options":{},"application_metadata":{"rbd":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":1.8799999952316284,"score_stable":1.8799999952316284,"optimal_score":1,"raw_score_acting":1.8799999952316284,"raw_score_stable":1.8799999952316284,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"232f165d-e880-471c-ad41-9cbb77b50aed","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6808","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6809","nonce":1162726296}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6810","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6811","nonce":1162726296}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6814","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6815","nonce":1162726296}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6812","nonce":1162726296},{"type":"v1","addr":"192.168.123.100:6813","nonce":1162726296}]},"public_addr":"192.168.123.100:6809/1162726296","cluster_addr":"192.168.123.100:6811/1162726296","heartbeat_back_addr":"192.168.123.100:6815/1162726296","heartbeat_front_addr":"192.168.123.100:6813/1162726296","state":["exists","up"]},{"osd":1,"uuid":"59a8c5e0-6c84-431b-ac69-a2f3326598f8","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6800","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6801","nonce":3952598619}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6802","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6803","nonce":3952598619}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6806","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6807","nonce":3952598619}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6804","nonce":3952598619},{"type":"v1","addr":"192.168.123.100:6805","nonce":3952598619}]},"public_addr":"192.168.123.100:6801/3952598619","cluster_addr":"192.168.123.100:6803/3952598619","heartbeat_back_addr":"192.168.123.100:6807/3952598619","heartbeat_front_addr":"192.168.123.100:6805/3952598619","state":["exists","up"]},{"osd":2,"uuid":"3e2deeca-bacd-4ce3-abce-84b4e72b511b","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":12,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6816","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6817","nonce":2144187382}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6818","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6819","nonce":2144187382}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6822","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6823","nonce":2144187382}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.100:6820","nonce":2144187382},{"type":"v1","addr":"192.168.123.100:6821","nonce":2144187382}]},"public_addr":"192.168.123.100:6817/2144187382","cluster_addr":"192.168.123.100:6819/2144187382","heartbeat_back_addr":"192.168.123.100:6823/2144187382","heartbeat_front_addr":"192.168.123.100:6821/2144187382","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interv
al":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4544132024016699391,"old_weight":0,"last_purged_snaps_scrub":"0.000000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"isa","technique":"reed_sol_van"}},"removed_snaps_queue":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_removed_snaps":[{"pool":2,"snaps":[{"begin":2,"length":1}]}],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 msgr2=0x7f486405e0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 0x7f486405e0b0 secure :-1 s=READY pgs=19 cs=0 l=1 rev1=1 crypto rx=0x7f4894058a80 tx=0x7f487c007af0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 msgr2=0x7f4894142940 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 secure :-1 s=READY pgs=64 cs=0 l=1 rev1=1 crypto rx=0x7f488402eea0 tx=0x7f48840047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 shutdown_connections 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f486403dc00 0x7f486405e0b0 unknown :-1 s=CLOSED pgs=19 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 --2- 192.168.123.100:0/2139142780 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f4894057d10 0x7f4894142940 unknown :-1 s=CLOSED pgs=64 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 >> 192.168.123.100:0/2139142780 conn(0x7f4894087340 msgr2=0x7f489407b550 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 
192.168.123.100:0/2139142780 shutdown_connections 2026-03-20T11:45:39.943 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:39.945+0000 7f489acae640 1 -- 192.168.123.100:0/2139142780 wait complete. 2026-03-20T11:45:39.952 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.0 flush_pg_stats 2026-03-20T11:45:39.952 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.1 flush_pg_stats 2026-03-20T11:45:39.952 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph tell osd.2 flush_pg_stats 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.029+0000 7fde73b61640 1 Processor -- start 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.029+0000 7fde73b61640 1 -- start start 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.029+0000 7fde73b61640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fde6c057710 0x7fde6c057ae0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.029+0000 7fde73b61640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fde6c05bff0 con 0x7fde6c058020 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.029+0000 7fde73b61640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fde6c05b720 con 0x7fde6c057710 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde710d5640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7fde6c058020 0x7fde6c07e420 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:32982/0 (socket says 192.168.123.100:32982) 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde710d5640 1 -- 192.168.123.100:0/3947983540 learned_addr learned my addr 192.168.123.100:0/3947983540 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde718d6640 1 --2- 192.168.123.100:0/3947983540 >> v2:192.168.123.100:3300/0 conn(0x7fde6c057710 0x7fde6c057ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1165455466 0 0) 0x7fde6c05bff0 con 0x7fde6c058020 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fde54003610 con 0x7fde6c058020 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v1:192.168.123.100:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 3803314638 0 0) 0x7fde54003610 con 0x7fde6c058020 2026-03-20T11:45:40.028 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 >> v2:192.168.123.100:3300/0 conn(0x7fde6c057710 msgr2=0x7fde6c057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 --2- 192.168.123.100:0/3947983540 >> v2:192.168.123.100:3300/0 conn(0x7fde6c057710 0x7fde6c057ae0 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 --> v1:192.168.123.100:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fde6c05bce0 con 0x7fde6c058020 2026-03-20T11:45:40.028 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v1:192.168.123.100:6789/0 3 ==== mon_map magic: 0 ==== 205+0+0 (unknown 2760865362 0 0) 0x7fde60002d80 con 0x7fde6c058020 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 >> v1:192.168.123.100:6789/0 conn(0x7fde6c058020 legacy=0x7fde6c07e420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 --2- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde54003f80 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fde6c05b720 con 0x7fde54003b90 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde718d6640 1 --2- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde54003f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7f240b579640 1 Processor -- start 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f240b579640 1 -- start start 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f240b579640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f2404151da0 0x7f2404172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.030+0000 7fde718d6640 1 -- 192.168.123.100:0/3947983540 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fde54003610 con 0x7fde54003b90 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f240b579640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f240405bff0 con 0x7f2404057710 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f240b579640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f240405b720 con 0x7f2404151da0 2026-03-20T11:45:40.029 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f2408aed640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f2404151da0 0x7f2404172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f2408aed640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f2404151da0 0x7f2404172180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36082/0 (socket says 192.168.123.100:36082) 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7fde718d6640 1 --2- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde54003f80 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fde5c009cc0 tx=0x7fde5c02f7c0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2b4057626d5159a3 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f2408aed640 1 -- 192.168.123.100:0/3347937051 learned_addr learned my addr 192.168.123.100:0/3347937051 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f2408aed640 1 -- 192.168.123.100:0/3347937051 >> v1:192.168.123.100:6789/0 conn(0x7f2404057710 legacy=0x7f2404057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fde5c03b070 con 0x7fde54003b90 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fde5c036530 con 0x7fde54003b90 2026-03-20T11:45:40.029 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7fde708d4640 1 -- 192.168.123.100:0/3947983540 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fde5c0096e0 con 0x7fde54003b90 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f0c8883a640 1 Processor -- start 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f2408aed640 1 -- 192.168.123.100:0/3347937051 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f240405bce0 con 0x7f2404151da0 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f2408aed640 1 --2- 192.168.123.100:0/3347937051 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404151da0 0x7f2404172180 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f23ec009080 tx=0x7f23ec033330 comp rx=0 tx=0).ready entity=mon.0 client_cookie=e7e858cfee9fdc24 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 -- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 msgr2=0x7fde54003f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.030 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 --2- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde54003f80 secure :-1 s=READY pgs=66 cs=0 l=1 rev1=1 crypto rx=0x7fde5c009cc0 tx=0x7fde5c02f7c0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f23fbfff640 1 -- 192.168.123.100:0/3347937051 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f23ec040070 con 0x7f2404151da0 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f23fbfff640 1 -- 192.168.123.100:0/3347937051 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f23ec03b440 con 0x7f2404151da0 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 -- 192.168.123.100:0/3947983540 shutdown_connections 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 --2- 192.168.123.100:0/3947983540 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde54003f80 unknown :-1 s=CLOSED pgs=66 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 --2- 192.168.123.100:0/3947983540 >> v2:192.168.123.100:3300/0 conn(0x7fde6c057710 0x7fde6c057ae0 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 -- 192.168.123.100:0/3947983540 >> 192.168.123.100:0/3947983540 conn(0x7fde6c082bf0 msgr2=0x7fde6c082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 -- 192.168.123.100:0/3947983540 shutdown_connections 2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7fde73b61640 1 -- 192.168.123.100:0/3947983540 wait complete. 
2026-03-20T11:45:40.030 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f23fbfff640 1 -- 192.168.123.100:0/3347937051 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f23ec03b740 con 0x7f2404151da0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.031+0000 7f0c8883a640 1 -- start start 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c8883a640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f0c801501d0 0x7f0c801705b0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c8883a640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f0c8005a550 con 0x7f0c8012f440 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c8883a640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f0c8012ee60 con 0x7f0c801501d0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c85dae640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f0c801501d0 0x7f0c801705b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c85dae640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f0c801501d0 0x7f0c801705b0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36084/0 (socket says 192.168.123.100:36084) 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c85dae640 1 -- 192.168.123.100:0/858036871 learned_addr learned my addr 192.168.123.100:0/858036871 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c85dae640 1 -- 192.168.123.100:0/858036871 >> v1:192.168.123.100:6789/0 conn(0x7f0c8012f440 legacy=0x7f0c8012f810 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.032+0000 7f0c85dae640 1 -- 192.168.123.100:0/858036871 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0c80171b00 con 0x7f0c801501d0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 -- 192.168.123.100:0/3347937051 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404151da0 msgr2=0x7f2404172180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 --2- 192.168.123.100:0/3347937051 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404151da0 0x7f2404172180 secure :-1 s=READY pgs=68 cs=0 l=1 rev1=1 crypto rx=0x7f23ec009080 tx=0x7f23ec033330 comp rx=0 tx=0).stop 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f0c85dae640 1 --2- 192.168.123.100:0/858036871 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c801501d0 0x7f0c801705b0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f0c7c009e00 tx=0x7f0c7c02f310 comp rx=0 tx=0).ready entity=mon.0 client_cookie=242c56d2253f371b 
server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 -- 192.168.123.100:0/3347937051 shutdown_connections 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 --2- 192.168.123.100:0/3347937051 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404151da0 0x7f2404172180 unknown :-1 s=CLOSED pgs=68 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 -- 192.168.123.100:0/3347937051 >> 192.168.123.100:0/3347937051 conn(0x7f2404082bf0 msgr2=0x7f2404082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f0c855ad640 1 -- 192.168.123.100:0/858036871 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0c7c03c070 con 0x7f0c801501d0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f0c855ad640 1 -- 192.168.123.100:0/858036871 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0c7c037440 con 0x7f0c801501d0 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7fde73b61640 1 Processor -- start 2026-03-20T11:45:40.031 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 -- 192.168.123.100:0/3347937051 shutdown_connections 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f240b579640 1 -- 192.168.123.100:0/3347937051 wait complete. 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.033+0000 7f0c855ad640 1 -- 192.168.123.100:0/858036871 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0c7c037760 con 0x7f0c801501d0 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- 192.168.123.100:0/858036871 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c801501d0 msgr2=0x7f0c801705b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 --2- 192.168.123.100:0/858036871 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c801501d0 0x7f0c801705b0 secure :-1 s=READY pgs=70 cs=0 l=1 rev1=1 crypto rx=0x7f0c7c009e00 tx=0x7f0c7c02f310 comp rx=0 tx=0).stop 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- 192.168.123.100:0/858036871 shutdown_connections 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 --2- 192.168.123.100:0/858036871 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c801501d0 0x7f0c801705b0 unknown :-1 s=CLOSED pgs=70 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- 192.168.123.100:0/858036871 >> 192.168.123.100:0/858036871 conn(0x7f0c80083010 msgr2=0x7f0c8007f510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f240b579640 1 Processor -- start 2026-03-20T11:45:40.032 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7fde73b61640 1 -- start start 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- 192.168.123.100:0/858036871 shutdown_connections 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7fde73b61640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde6c1c9d90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7fde73b61640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fde6c0768e0 con 0x7fde54003b90 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f240b579640 1 -- start start 2026-03-20T11:45:40.032 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- 192.168.123.100:0/858036871 wait complete. 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f240b579640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 Processor -- start 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- start start 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.034+0000 7f0c8883a640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0c8007ffb0 con 0x7f0c8012f440 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c865af640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c865af640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36106/0 (socket says 192.168.123.100:36106) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c865af640 1 -- 192.168.123.100:0/1217033234 learned_addr learned my addr 192.168.123.100:0/1217033234 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f240b579640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2404058c30 con 0x7f2404057710 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde718d6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 
0x7fde6c1c9d90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde718d6640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde6c1c9d90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36092/0 (socket says 192.168.123.100:36092) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde718d6640 1 -- 192.168.123.100:0/3067268127 learned_addr learned my addr 192.168.123.100:0/3067268127 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c865af640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0c8014f1e0 con 0x7f0c8012f440 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f24092ee640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f24092ee640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36108/0 (socket says 192.168.123.100:36108) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f24092ee640 1 -- 192.168.123.100:0/1140463890 learned_addr learned my addr 192.168.123.100:0/1140463890 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c865af640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f0c7400c4a0 tx=0x7f0c7400c970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde718d6640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fde6c1b4c40 con 0x7fde54003b90 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0c740069f0 con 0x7f0c8012f440 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0c74006b90 con 0x7f0c8012f440 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f24092ee640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2404108780 con 
0x7f2404057710 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde718d6640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde6c1c9d90 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fde5c004770 tx=0x7fde5c004810 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f24092ee640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f23f400c4a0 tx=0x7f23f400c970 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fde5c046020 con 0x7fde54003b90 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fde5c040680 con 0x7fde54003b90 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fde6c1b3e00 con 0x7fde54003b90 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fde5c03b040 con 0x7fde54003b90 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fde6c1c9800 con 0x7fde54003b90 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f23f40069f0 con 0x7f2404057710 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f23f4006b90 con 0x7f2404057710 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0c740126f0 con 0x7f0c8012f440 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.035+0000 7f0c8883a640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0c80131aa0 con 0x7f0c8012f440 2026-03-20T11:45:40.034 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7f0c8883a640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0c80131790 con 0x7f0c8012f440 2026-03-20T11:45:40.034 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.036+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f0c801501d0 con 0x7f0c8012f440 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fde5c040960 con 0x7fde54003b90 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7fde5a7fc640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 0x7fde3c05e090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f0c74012890 con 0x7f0c8012f440 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 0x7f0c5005e090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f0c74052b10 con 0x7f0c8012f440 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 0x7f0c500812c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c85dae640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 0x7f0c5005e090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f0c74026e50 con 0x7f0c50060e50 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=15) ==== 24+0+0 (secure 0 0 0) 0x7f0c74052e00 con 0x7f0c8012f440 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7fde710d5640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 0x7fde3c05e090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 
7fde710d5640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 0x7fde3c05e090 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fde6c058480 tx=0x7fde60002cd0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.035 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f0c86db0640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 0x7f0c500812c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7fde5c00f040 con 0x7fde54003b90 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7fde6c058020 con 0x7fde54003b90 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7f0c85dae640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 0x7f0c5005e090 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f0c7c002780 tx=0x7f0c7c033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde5a7fc640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 0x7fde3c0811c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7fde5c03be50 con 0x7fde3c060d50 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=15) ==== 24+0+0 (secure 0 0 0) 0x7fde5c07d050 con 0x7fde54003b90 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7f0c86db0640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 0x7f0c500812c0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f23f40126f0 con 0x7f2404057710 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2404108470 
con 0x7f2404057710 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.037+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2404113a60 con 0x7f2404057710 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde720d7640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 0x7fde3c0811c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== command_reply(tid 1: 0 ) ==== 8+0+32513 (crc 0 0 0) 0x7f0c74026e50 con 0x7f0c50060e50 2026-03-20T11:45:40.036 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.038+0000 7fde720d7640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 0x7fde3c0811c0 crc :-1 s=READY pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.039+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_get_version(what=osdmap handle=1) -- 0x7f23c8000f80 con 0x7f2404057710 2026-03-20T11:45:40.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.039+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== command_reply(tid 1: 0 ) ==== 8+0+32513 (crc 0 0 0) 0x7fde5c03be50 con 0x7fde3c060d50 2026-03-20T11:45:40.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.039+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f23f4012890 con 0x7f2404057710 2026-03-20T11:45:40.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.039+0000 7f23f9ffb640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 0x7f23d405e040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f23f4052ab0 con 0x7f2404057710 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f2408aed640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 0x7f23d405e040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f23f9ffb640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 0x7f23d40811f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.038 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- command(tid 1: {"prefix": "get_command_descriptions"}) -- 0x7f23f4006b90 con 0x7f23d4060d80 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_get_version_reply(handle=1 version=15) ==== 24+0+0 (secure 0 0 0) 0x7f23f4052e70 con 0x7f2404057710 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f2408aed640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 0x7f23d405e040 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f23ec009670 tx=0x7f23ec037000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f2409aef640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 0x7f23d40811f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.038 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.040+0000 7f2409aef640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 0x7f23d40811f0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.039 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.041+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== command_reply(tid 1: 0 ) ==== 8+0+32513 (crc 0 0 0) 0x7f23f4006b90 con 0x7f23d4060d80 2026-03-20T11:45:40.045 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.047+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7fde6c1b4710 con 0x7fde3c060d50 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde5a7fc640 1 -- 192.168.123.100:0/3067268127 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7fde6c1b4710 con 0x7fde3c060d50 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f0c80150380 con 0x7f0c50060e50 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 msgr2=0x7fde3c0811c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 0x7fde3c0811c0 crc :-1 s=READY pgs=8 cs=0 l=1 
rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c6affd640 1 -- 192.168.123.100:0/1217033234 <== osd.2 v2:192.168.123.100:6816/2144187382 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f0c80150380 con 0x7f0c50060e50 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 msgr2=0x7fde3c05e090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 0x7fde3c05e090 secure :-1 s=READY pgs=20 cs=0 l=1 rev1=1 crypto rx=0x7fde6c058480 tx=0x7fde60002cd0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 msgr2=0x7fde6c1c9d90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde6c1c9d90 secure :-1 s=READY pgs=72 cs=0 l=1 rev1=1 crypto rx=0x7fde5c004770 tx=0x7fde5c004810 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 msgr2=0x7f0c500812c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 0x7f0c500812c0 crc :-1 s=READY pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 msgr2=0x7f0c5005e090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 0x7f0c5005e090 secure :-1 s=READY pgs=21 cs=0 l=1 rev1=1 crypto rx=0x7f0c7c002780 tx=0x7f0c7c033000 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 msgr2=0x7f0c8007e7d0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 secure :-1 s=READY pgs=71 cs=0 l=1 rev1=1 crypto rx=0x7f0c7400c4a0 
tx=0x7f0c7400c970 comp rx=0 tx=0).stop 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 shutdown_connections 2026-03-20T11:45:40.046 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fde3c060d50 0x7fde3c0811c0 unknown :-1 s=CLOSED pgs=8 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fde3c03dbe0 0x7fde3c05e090 unknown :-1 s=CLOSED pgs=20 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 --2- 192.168.123.100:0/3067268127 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fde54003b90 0x7fde6c1c9d90 unknown :-1 s=CLOSED pgs=72 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.048+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 >> 192.168.123.100:0/3067268127 conn(0x7fde6c082bf0 msgr2=0x7fde6c075a80 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 shutdown_connections 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x7f0c50060e50 0x7f0c500812c0 unknown :-1 s=CLOSED pgs=5 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0c5003dbe0 0x7f0c5005e090 unknown :-1 s=CLOSED pgs=21 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 --2- 192.168.123.100:0/1217033234 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0c8012f440 0x7f0c8007e7d0 unknown :-1 s=CLOSED pgs=71 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 >> 192.168.123.100:0/1217033234 conn(0x7f0c80083010 msgr2=0x7f0c8012d060 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 shutdown_connections 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 shutdown_connections 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7f0c68ff9640 1 -- 192.168.123.100:0/1217033234 wait complete. 2026-03-20T11:45:40.047 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.049+0000 7fde73b61640 1 -- 192.168.123.100:0/3067268127 wait complete. 
2026-03-20T11:45:40.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.050+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- command(tid 2: {"prefix": "flush_pg_stats"}) -- 0x7f23c8002d00 con 0x7f23d4060d80 2026-03-20T11:45:40.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.050+0000 7f23f9ffb640 1 -- 192.168.123.100:0/1140463890 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== command_reply(tid 2: 0 ) ==== 8+0+11 (crc 0 0 0) 0x7f23c8002d00 con 0x7f23d4060d80 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 msgr2=0x7f23d40811f0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 0x7f23d40811f0 crc :-1 s=READY pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 msgr2=0x7f23d405e040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 0x7f23d405e040 secure :-1 s=READY pgs=22 cs=0 l=1 rev1=1 crypto rx=0x7f23ec009670 tx=0x7f23ec037000 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 msgr2=0x7f240407c8a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 secure :-1 s=READY pgs=73 cs=0 l=1 rev1=1 crypto rx=0x7f23f400c4a0 tx=0x7f23f400c970 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 shutdown_connections 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f23d4060d80 0x7f23d40811f0 unknown :-1 s=CLOSED pgs=6 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f23d403db90 0x7f23d405e040 unknown :-1 s=CLOSED pgs=22 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 --2- 192.168.123.100:0/1140463890 >> 
[v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2404057710 0x7f240407c8a0 unknown :-1 s=CLOSED pgs=73 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 >> 192.168.123.100:0/1140463890 conn(0x7f2404082bf0 msgr2=0x7f2404074680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 shutdown_connections 2026-03-20T11:45:40.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.051+0000 7f240b579640 1 -- 192.168.123.100:0/1140463890 wait complete. 2026-03-20T11:45:40.055 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:40.055 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T11:45:40.056 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:40.056 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T11:45:40.062 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:40.062 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.181+0000 7f37c89cb640 1 Processor -- start 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.181+0000 7f37c89cb640 1 -- start start 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.181+0000 7f37c89cb640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f37c0058330 0x7f37c0058700 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c89cb640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f37c0059d30 con 0x7f37c0058c40 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c89cb640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f37c0083880 con 0x7f37c0058330 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c6740640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f37c0058330 0x7f37c0058700 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c6740640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f37c0058330 0x7f37c0058700 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36124/0 (socket says 192.168.123.100:36124) 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c6740640 1 -- 192.168.123.100:0/771540230 learned_addr learned my addr 192.168.123.100:0/771540230 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9bfff640 1 Processor -- start 
2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c6740640 1 -- 192.168.123.100:0/771540230 >> v1:192.168.123.100:6789/0 conn(0x7f37c0058c40 legacy=0x7f37c0075090 unknown :-1 s=STATE_CONNECTING l=0).mark_down 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f37c6740640 1 -- 192.168.123.100:0/771540230 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f37c005cec0 con 0x7f37c0058330 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9bfff640 1 -- start start 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9bfff640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f2b9c153ec0 0x7f2b9c1742a0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9bfff640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f2b9c05aea0 con 0x7f2b9c058da0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9bfff640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f2b9c059c80 con 0x7f2b9c153ec0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9affd640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f2b9c058da0 0x7f2b9c059170 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33008/0 (socket says 192.168.123.100:33008) 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9affd640 1 -- 192.168.123.100:0/546597815 learned_addr learned my addr 192.168.123.100:0/546597815 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.182+0000 7f2b9a7fc640 1 --2- 192.168.123.100:0/546597815 >> v2:192.168.123.100:3300/0 conn(0x7f2b9c153ec0 0x7f2b9c1742a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b99ffb640 1 -- 192.168.123.100:0/546597815 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 2205788918 0 0) 0x7f2b9c05aea0 con 0x7f2b9c058da0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b99ffb640 1 -- 192.168.123.100:0/546597815 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2b88003610 con 0x7f2b9c058da0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b9a7fc640 1 -- 192.168.123.100:0/546597815 >> v1:192.168.123.100:6789/0 conn(0x7f2b9c058da0 legacy=0x7f2b9c059170 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b9a7fc640 1 -- 192.168.123.100:0/546597815 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2b9c05ab90 con 0x7f2b9c153ec0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b9a7fc640 1 --2- 192.168.123.100:0/546597815 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c153ec0 0x7f2b9c1742a0 
secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f2b8c004af0 tx=0x7f2b8c02f210 comp rx=0 tx=0).ready entity=mon.0 client_cookie=76341c7b55a52fb1 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b99ffb640 1 -- 192.168.123.100:0/546597815 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2b8c03c070 con 0x7f2b9c153ec0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b99ffb640 1 -- 192.168.123.100:0/546597815 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f2b8c02fdd0 con 0x7f2b9c153ec0 2026-03-20T11:45:40.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f2b99ffb640 1 -- 192.168.123.100:0/546597815 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2b8c037590 con 0x7f2b9c153ec0 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 -- 192.168.123.100:0/546597815 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c153ec0 msgr2=0x7f2b9c1742a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/546597815 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c153ec0 0x7f2b9c1742a0 secure :-1 s=READY pgs=76 cs=0 l=1 rev1=1 crypto rx=0x7f2b8c004af0 tx=0x7f2b8c02f210 comp rx=0 tx=0).stop 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f37c6740640 1 --2- 192.168.123.100:0/771540230 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c0058700 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f37b4005dc0 tx=0x7f37b40312d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=afef6f77e1436258 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.183+0000 7f37c573e640 1 -- 192.168.123.100:0/771540230 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f37b4037070 con 0x7f37c0058330 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c573e640 1 -- 192.168.123.100:0/771540230 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f37b4031e00 con 0x7f37c0058330 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c573e640 1 -- 192.168.123.100:0/771540230 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f37b403b570 con 0x7f37c0058330 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 -- 192.168.123.100:0/771540230 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 msgr2=0x7f37c0058700 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 --2- 192.168.123.100:0/771540230 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c0058700 secure :-1 s=READY pgs=77 cs=0 l=1 rev1=1 crypto rx=0x7f37b4005dc0 tx=0x7f37b40312d0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 
7f37c89cb640 1 -- 192.168.123.100:0/771540230 shutdown_connections 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 --2- 192.168.123.100:0/771540230 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c0058700 unknown :-1 s=CLOSED pgs=77 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 -- 192.168.123.100:0/771540230 >> 192.168.123.100:0/771540230 conn(0x7f37c0083010 msgr2=0x7f37c007f510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 -- 192.168.123.100:0/771540230 shutdown_connections 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 -- 192.168.123.100:0/771540230 wait complete. 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f37c89cb640 1 Processor -- start 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 -- 192.168.123.100:0/546597815 shutdown_connections 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/546597815 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c153ec0 0x7f2b9c1742a0 unknown :-1 s=CLOSED pgs=76 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 -- 192.168.123.100:0/546597815 >> 192.168.123.100:0/546597815 conn(0x7f2b9c087390 msgr2=0x7f2b9c0570b0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.184+0000 7f2b9bfff640 1 -- 192.168.123.100:0/546597815 shutdown_connections 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c89cb640 1 -- start start 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c89cb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c89cb640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f37c0075920 con 0x7f37c0058330 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f2b9bfff640 1 -- 192.168.123.100:0/546597815 wait complete. 
2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c6740640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c6740640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36142/0 (socket says 192.168.123.100:36142) 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.185+0000 7f37c6740640 1 -- 192.168.123.100:0/3215009650 learned_addr learned my addr 192.168.123.100:0/3215009650 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.183 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37c6740640 1 -- 192.168.123.100:0/3215009650 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f37c01d0f40 con 0x7f37c0058330 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37c6740640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f37b4008a60 tx=0x7f37b40047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f37b4004150 con 0x7f37c0058330 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f37b40042f0 con 0x7f37c0058330 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f2b9bfff640 1 Processor -- start 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f37b4003700 con 0x7f37c0058330 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f37c007edb0 con 0x7f37c0058330 2026-03-20T11:45:40.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.186+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f37c01d3150 con 0x7f37c0058330 2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f2b9bfff640 1 -- start start 2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f37c0058700 con 0x7f37c0058330 
2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f37b4044070 con 0x7f37c0058330 2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f37aaffd640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 0x7f379005e090 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f37b4042050 con 0x7f37c0058330 2026-03-20T11:45:40.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.187+0000 7f37c5f3f640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 0x7f379005e090 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.188+0000 7f2b9bfff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.188+0000 7f2b9bfff640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2b9c1747e0 con 0x7f2b9c058da0 2026-03-20T11:45:40.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.188+0000 7f2b9affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.188+0000 7f2b9affd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36154/0 (socket says 192.168.123.100:36154) 2026-03-20T11:45:40.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.188+0000 7f2b9affd640 1 -- 192.168.123.100:0/2869359326 learned_addr learned my addr 192.168.123.100:0/2869359326 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.190+0000 7f37c5f3f640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 0x7f379005e090 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f37c00590a0 tx=0x7f37ac007b40 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.193 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f2b9affd640 1 -- 192.168.123.100:0/2869359326 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2b9c10cf90 con 0x7f2b9c058da0 2026-03-20T11:45:40.193 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f2b9affd640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f2b90007c40 tx=0x7f2b9000cb20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.193 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f37b4076240 con 0x7f37c0058330 2026-03-20T11:45:40.193 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2b90017070 con 0x7f2b9c058da0 2026-03-20T11:45:40.193 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f2b900059a0 con 0x7f2b9c058da0 2026-03-20T11:45:40.193 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.195+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2b90005ca0 con 0x7f2b9c058da0 2026-03-20T11:45:40.194 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.196+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f2b9c0825b0 con 0x7f2b9c058da0 2026-03-20T11:45:40.194 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.196+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2b9c083380 con 0x7f2b9c058da0 2026-03-20T11:45:40.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.197+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f2b90007070 con 0x7f2b9c058da0 2026-03-20T11:45:40.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.197+0000 7f2b7b7fe640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 0x7f2b6005e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.195 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.197+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f2b90051620 con 0x7f2b9c058da0 2026-03-20T11:45:40.196 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.197+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2b9c059170 con 0x7f2b9c058da0 2026-03-20T11:45:40.196 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.198+0000 7f2b9a7fc640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 0x7f2b6005e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 
tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.196 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.199+0000 7f2b9a7fc640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 0x7f2b6005e100 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f2b8c02f250 tx=0x7f2b8c033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.198 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.200+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f2b90016170 con 0x7f2b9c058da0 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f9a0d1640 1 Processor -- start 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f9a0d1640 1 -- start start 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f9a0d1640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f7f94151da0 0x7f7f94172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f9a0d1640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f7f9405bff0 con 0x7f7f94057710 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f9a0d1640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f7f9405b720 con 0x7f7f94151da0 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f92ffd640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f7f94151da0 0x7f7f94172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f92ffd640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f7f94151da0 0x7f7f94172180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36168/0 (socket says 192.168.123.100:36168) 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f92ffd640 1 -- 192.168.123.100:0/1003548280 learned_addr learned my addr 192.168.123.100:0/1003548280 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f92ffd640 1 -- 192.168.123.100:0/1003548280 >> v1:192.168.123.100:6789/0 conn(0x7f7f94057710 legacy=0x7f7f94057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.231 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.233+0000 7f7f92ffd640 1 -- 192.168.123.100:0/1003548280 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7f9405bce0 con 0x7f7f94151da0 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f92ffd640 1 --2- 192.168.123.100:0/1003548280 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94151da0 0x7f7f94172180 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f7f84009080 tx=0x7f7f8402ee70 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=5639c4abfc583cc3 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f927fc640 1 -- 192.168.123.100:0/1003548280 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f8403c070 con 0x7f7f94151da0 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f927fc640 1 -- 192.168.123.100:0/1003548280 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f7f8402fab0 con 0x7f7f94151da0 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f927fc640 1 -- 192.168.123.100:0/1003548280 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f8402fdb0 con 0x7f7f94151da0 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1003548280 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94151da0 msgr2=0x7f7f94172180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1003548280 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94151da0 0x7f7f94172180 secure :-1 s=READY pgs=81 cs=0 l=1 rev1=1 crypto rx=0x7f7f84009080 tx=0x7f7f8402ee70 comp rx=0 tx=0).stop 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1003548280 shutdown_connections 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1003548280 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94151da0 0x7f7f94172180 unknown :-1 s=CLOSED pgs=81 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.232 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.234+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1003548280 >> 192.168.123.100:0/1003548280 conn(0x7f7f94082bf0 msgr2=0x7f7f94082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.233 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.235+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1003548280 shutdown_connections 2026-03-20T11:45:40.234 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.236+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1003548280 wait complete. 
2026-03-20T11:45:40.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.237+0000 7f7f9a0d1640 1 Processor -- start 2026-03-20T11:45:40.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.237+0000 7f7f9a0d1640 1 -- start start 2026-03-20T11:45:40.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.237+0000 7f7f9a0d1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.235 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.237+0000 7f7f9a0d1640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f7f9405a360 con 0x7f7f94057710 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f937fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f937fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36170/0 (socket says 192.168.123.100:36170) 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f937fe640 1 -- 192.168.123.100:0/1866586155 learned_addr learned my addr 192.168.123.100:0/1866586155 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f937fe640 1 -- 192.168.123.100:0/1866586155 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f7f94167b20 con 0x7f7f94057710 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f937fe640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f7f8000c680 tx=0x7f7f8000cb50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f80016020 con 0x7f7f94057710 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f7f80005020 con 0x7f7f94057710 2026-03-20T11:45:40.236 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f7f94166bc0 con 0x7f7f94057710 2026-03-20T11:45:40.237 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f7f80016020 con 0x7f7f94057710 2026-03-20T11:45:40.237 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.238+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f7f94172460 con 0x7f7f94057710 2026-03-20T11:45:40.237 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.239+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f7f54005180 con 0x7f7f94057710 2026-03-20T11:45:40.237 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.239+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f7f80005220 con 0x7f7f94057710 2026-03-20T11:45:40.237 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.240+0000 7f7f6ffff640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 0x7f7f6405e1f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:40.238 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.240+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f7f80051690 con 0x7f7f94057710 2026-03-20T11:45:40.238 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.240+0000 7f7f92ffd640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 0x7f7f6405e1f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:40.238 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.240+0000 7f7f92ffd640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 0x7f7f6405e1f0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f7f8402f3a0 tx=0x7f7f84033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:40.239 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.242+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f7f80015170 con 0x7f7f94057710 2026-03-20T11:45:40.315 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.316+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f37c005e930 con 0x7f37c0058330 2026-03-20T11:45:40.317 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.319+0000 7f37aaffd640 1 -- 192.168.123.100:0/3215009650 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f37b403a5c0 con 0x7f37c0058330 2026-03-20T11:45:40.317 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 msgr2=0x7f379005e090 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 0x7f379005e090 secure :-1 s=READY pgs=23 cs=0 l=1 rev1=1 crypto rx=0x7f37c00590a0 tx=0x7f37ac007b40 comp rx=0 tx=0).stop 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 msgr2=0x7f37c01d14a0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 secure :-1 s=READY pgs=78 cs=0 l=1 rev1=1 crypto rx=0x7f37b4008a60 tx=0x7f37b40047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 shutdown_connections 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f379003dbe0 0x7f379005e090 unknown :-1 s=CLOSED pgs=23 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 --2- 192.168.123.100:0/3215009650 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f37c0058330 0x7f37c01d14a0 unknown :-1 s=CLOSED pgs=78 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 >> 192.168.123.100:0/3215009650 conn(0x7f37c0083010 msgr2=0x7f37c0076700 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.319 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.321+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 shutdown_connections 2026-03-20T11:45:40.321 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.323+0000 7f37c89cb640 1 -- 192.168.123.100:0/3215009650 wait complete. 
2026-03-20T11:45:40.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.328+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f2b9c10c620 con 0x7f2b9c058da0 2026-03-20T11:45:40.326 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.328+0000 7f2b7b7fe640 1 -- 192.168.123.100:0/2869359326 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f2b900090a0 con 0x7f2b9c058da0 2026-03-20T11:45:40.327 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 msgr2=0x7f2b6005e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 0x7f2b6005e100 secure :-1 s=READY pgs=24 cs=0 l=1 rev1=1 crypto rx=0x7f2b8c02f250 tx=0x7f2b8c033000 comp rx=0 tx=0).stop 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 msgr2=0x7f2b9c082c60 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 secure :-1 s=READY pgs=79 cs=0 l=1 rev1=1 crypto rx=0x7f2b90007c40 tx=0x7f2b9000cb20 comp rx=0 tx=0).stop 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 shutdown_connections 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f2b6003dc50 0x7f2b6005e100 unknown :-1 s=CLOSED pgs=24 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 --2- 192.168.123.100:0/2869359326 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2b9c058da0 0x7f2b9c082c60 unknown :-1 s=CLOSED pgs=79 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 >> 192.168.123.100:0/2869359326 conn(0x7f2b9c087390 msgr2=0x7f2b9c079340 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 shutdown_connections 2026-03-20T11:45:40.329 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.331+0000 7f2b9bfff640 1 -- 192.168.123.100:0/2869359326 wait complete. 
2026-03-20T11:45:40.331 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.2 2026-03-20T11:45:40.338 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.1 2026-03-20T11:45:40.358 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.360+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f7f54005740 con 0x7f7f94057710 2026-03-20T11:45:40.359 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.361+0000 7f7f6ffff640 1 -- 192.168.123.100:0/1866586155 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f7f800153c0 con 0x7f7f94057710 2026-03-20T11:45:40.359 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 msgr2=0x7f7f6405e1f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 0x7f7f6405e1f0 secure :-1 s=READY pgs=25 cs=0 l=1 rev1=1 crypto rx=0x7f7f8402f3a0 tx=0x7f7f84033000 comp rx=0 tx=0).stop 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 msgr2=0x7f7f94165830 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 secure :-1 s=READY pgs=82 cs=0 l=1 rev1=1 crypto rx=0x7f7f8000c680 tx=0x7f7f8000cb50 comp rx=0 tx=0).stop 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 shutdown_connections 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f7f6403dd40 0x7f7f6405e1f0 unknown :-1 s=CLOSED pgs=25 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 --2- 192.168.123.100:0/1866586155 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f7f94057710 0x7f7f94165830 unknown :-1 s=CLOSED pgs=82 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 >> 192.168.123.100:0/1866586155 conn(0x7f7f94082bf0 msgr2=0x7f7f94058e10 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 1 -- 192.168.123.100:0/1866586155 shutdown_connections 2026-03-20T11:45:40.361 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:40.363+0000 7f7f9a0d1640 
1 -- 192.168.123.100:0/1866586155 wait complete. 2026-03-20T11:45:40.370 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.0 2026-03-20T11:45:41.332 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T11:45:41.339 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T11:45:41.371 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f834d016640 1 Processor -- start 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f834d016640 1 -- start start 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f834d016640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8348151da0 0x7f8348172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f834d016640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f834805bff0 con 0x7f8348057710 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f834d016640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f834805b720 con 0x7f8348151da0 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f8346575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8348151da0 0x7f8348172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f8346575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8348151da0 0x7f8348172180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36184/0 (socket says 192.168.123.100:36184) 2026-03-20T11:45:41.406 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f8346575640 1 -- 192.168.123.100:0/4186516337 learned_addr learned my addr 192.168.123.100:0/4186516337 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f8346575640 1 -- 192.168.123.100:0/4186516337 >> v1:192.168.123.100:6789/0 conn(0x7f8348057710 legacy=0x7f8348057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.408+0000 7f8346575640 1 -- 192.168.123.100:0/4186516337 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f834805bce0 con 0x7f8348151da0 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f8346575640 1 --2- 192.168.123.100:0/4186516337 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348151da0 0x7f8348172180 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f833c009080 tx=0x7f833c02ee70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2a4acb6e7fb9081f server_cookie=0 in_seq=0 out_seq=0 
2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f8345d74640 1 -- 192.168.123.100:0/4186516337 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f833c03c070 con 0x7f8348151da0 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f8345d74640 1 -- 192.168.123.100:0/4186516337 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f833c02fab0 con 0x7f8348151da0 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f8345d74640 1 -- 192.168.123.100:0/4186516337 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f833c02fdb0 con 0x7f8348151da0 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f834d016640 1 -- 192.168.123.100:0/4186516337 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348151da0 msgr2=0x7f8348172180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.409+0000 7f834d016640 1 --2- 192.168.123.100:0/4186516337 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348151da0 0x7f8348172180 secure :-1 s=READY pgs=84 cs=0 l=1 rev1=1 crypto rx=0x7f833c009080 tx=0x7f833c02ee70 comp rx=0 tx=0).stop 2026-03-20T11:45:41.407 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.410+0000 7f834d016640 1 -- 192.168.123.100:0/4186516337 shutdown_connections 2026-03-20T11:45:41.408 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.410+0000 7f834d016640 1 --2- 192.168.123.100:0/4186516337 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348151da0 0x7f8348172180 unknown :-1 s=CLOSED pgs=84 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.408 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.410+0000 7f834d016640 1 -- 192.168.123.100:0/4186516337 >> 192.168.123.100:0/4186516337 conn(0x7f8348082bf0 msgr2=0x7f8348082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.409 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 -- 192.168.123.100:0/4186516337 shutdown_connections 2026-03-20T11:45:41.409 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 -- 192.168.123.100:0/4186516337 wait complete. 
2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 Processor -- start 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 -- start start 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f834d016640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8348058c30 con 0x7f8348057710 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f8346d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f8346d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36194/0 (socket says 192.168.123.100:36194) 2026-03-20T11:45:41.410 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.412+0000 7f8346d76640 1 -- 192.168.123.100:0/772182459 learned_addr learned my addr 192.168.123.100:0/772182459 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:41.411 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f8346d76640 1 -- 192.168.123.100:0/772182459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f834807c040 con 0x7f8348057710 2026-03-20T11:45:41.411 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f8346d76640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f833000c450 tx=0x7f833000c920 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8330016020 con 0x7f8348057710 2026-03-20T11:45:41.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f8330005150 con 0x7f8348057710 2026-03-20T11:45:41.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8330005430 con 0x7f8348057710 2026-03-20T11:45:41.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f834807c2e0 con 0x7f8348057710 2026-03-20T11:45:41.412 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.413+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f834813ec90 con 0x7f8348057710 2026-03-20T11:45:41.412 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.414+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f834807bb10 con 0x7f8348057710 2026-03-20T11:45:41.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.414+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f83300068c0 con 0x7f8348057710 2026-03-20T11:45:41.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.415+0000 7f83277fe640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 0x7f831405e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.415+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f8330052730 con 0x7f8348057710 2026-03-20T11:45:41.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.415+0000 7f8346575640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 0x7f831405e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.413 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.415+0000 7f8346575640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 0x7f831405e100 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f833c009670 tx=0x7f833c033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.415 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.417+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f834807bb10 con 0x7f8348057710 2026-03-20T11:45:41.449 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f92a53eb640 1 Processor -- start 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f8449379640 1 Processor -- start 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f8449379640 1 -- start start 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f92a53eb640 1 -- start start 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f92a53eb640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f92a0151da0 0x7f92a0172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f92a53eb640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f92a005bff0 con 0x7f92a0057710 2026-03-20T11:45:41.450 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f92a53eb640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f92a005b720 con 0x7f92a0151da0 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f8449379640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8444158580 0x7f8444178960 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f8449379640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f8444059a90 con 0x7f8444058330 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.451+0000 7f8449379640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f844405c0f0 con 0x7f8444158580 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f92a0151da0 0x7f92a0172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f92a0151da0 0x7f92a0172180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36202/0 (socket says 192.168.123.100:36202) 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 -- 192.168.123.100:0/2356017763 learned_addr learned my addr 192.168.123.100:0/2356017763 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8444158580 0x7f8444178960 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f8444158580 0x7f8444178960 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36200/0 (socket says 192.168.123.100:36200) 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 -- 192.168.123.100:0/3548106352 learned_addr learned my addr 192.168.123.100:0/3548106352 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f8442ffd640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f8444058330 0x7f8444058700 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33026/0 (socket says 192.168.123.100:33026) 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 -- 192.168.123.100:0/2356017763 >> v1:192.168.123.100:6789/0 conn(0x7f92a0057710 legacy=0x7f92a0057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 -- 192.168.123.100:0/2356017763 --> 
v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f92a005bce0 con 0x7f92a0151da0 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929dffb640 1 -- 192.168.123.100:0/2356017763 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 63117812 0 0) 0x7f92a005bff0 con 0x7f92a0057710 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 -- 192.168.123.100:0/3548106352 >> v1:192.168.123.100:6789/0 conn(0x7f8444058330 legacy=0x7f8444058700 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.450 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 -- 192.168.123.100:0/3548106352 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f844405cde0 con 0x7f8444158580 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f929e7fc640 1 --2- 192.168.123.100:0/2356017763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0151da0 0x7f92a0172180 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f9288009080 tx=0x7f928802f0d0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=c17cbdd9c3388454 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.452+0000 7f84427fc640 1 --2- 192.168.123.100:0/3548106352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444158580 0x7f8444178960 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f8434009080 tx=0x7f843402ed30 comp rx=0 tx=0).ready entity=mon.0 client_cookie=f9d6cc7922bd9d6 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8441ffb640 1 -- 192.168.123.100:0/3548106352 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f843403c070 con 0x7f8444158580 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8441ffb640 1 -- 192.168.123.100:0/3548106352 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f843402f8b0 con 0x7f8444158580 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8441ffb640 1 -- 192.168.123.100:0/3548106352 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f843402fbb0 con 0x7f8444158580 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f929dffb640 1 -- 192.168.123.100:0/2356017763 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f928803c070 con 0x7f92a0151da0 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f929dffb640 1 -- 192.168.123.100:0/2356017763 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f928802fb00 con 0x7f92a0151da0 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f929dffb640 1 -- 192.168.123.100:0/2356017763 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f928802fe00 con 0x7f92a0151da0 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 -- 192.168.123.100:0/3548106352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444158580 
msgr2=0x7f8444178960 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 --2- 192.168.123.100:0/3548106352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444158580 0x7f8444178960 secure :-1 s=READY pgs=88 cs=0 l=1 rev1=1 crypto rx=0x7f8434009080 tx=0x7f843402ed30 comp rx=0 tx=0).stop 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 -- 192.168.123.100:0/2356017763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0151da0 msgr2=0x7f92a0172180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 --2- 192.168.123.100:0/2356017763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0151da0 0x7f92a0172180 secure :-1 s=READY pgs=89 cs=0 l=1 rev1=1 crypto rx=0x7f9288009080 tx=0x7f928802f0d0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 -- 192.168.123.100:0/3548106352 shutdown_connections 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 --2- 192.168.123.100:0/3548106352 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444158580 0x7f8444178960 unknown :-1 s=CLOSED pgs=88 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 -- 192.168.123.100:0/3548106352 >> 192.168.123.100:0/3548106352 conn(0x7f8444083010 msgr2=0x7f844407f510 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 -- 192.168.123.100:0/3548106352 shutdown_connections 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f8449379640 1 -- 192.168.123.100:0/3548106352 wait complete. 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 -- 192.168.123.100:0/2356017763 shutdown_connections 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 --2- 192.168.123.100:0/2356017763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0151da0 0x7f92a0172180 unknown :-1 s=CLOSED pgs=89 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 -- 192.168.123.100:0/2356017763 >> 192.168.123.100:0/2356017763 conn(0x7f92a0082bf0 msgr2=0x7f92a0082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 -- 192.168.123.100:0/2356017763 shutdown_connections 2026-03-20T11:45:41.451 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.453+0000 7f92a53eb640 1 -- 192.168.123.100:0/2356017763 wait complete. 
2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f92a53eb640 1 Processor -- start 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8449379640 1 Processor -- start 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8449379640 1 -- start start 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f92a53eb640 1 -- start start 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f92a53eb640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f92a53eb640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f92a0172840 con 0x7f92a0057710 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8449379640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8449379640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f8444178ea0 con 0x7f8444058330 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8442ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8442ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36234/0 (socket says 192.168.123.100:36234) 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8442ffd640 1 -- 192.168.123.100:0/2013205276 learned_addr learned my addr 192.168.123.100:0/2013205276 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f929effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f929effd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36218/0 (socket says 192.168.123.100:36218) 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f929effd640 1 -- 192.168.123.100:0/339599148 learned_addr learned my addr 192.168.123.100:0/339599148 (peer_addr_for_me v2:192.168.123.100:0/0) 
2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f929effd640 1 -- 192.168.123.100:0/339599148 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f92a0157510 con 0x7f92a0057710 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8442ffd640 1 -- 192.168.123.100:0/2013205276 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f84441583b0 con 0x7f8444058330 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.454+0000 7f8442ffd640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f843800c680 tx=0x7f843800cb50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.452 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f929effd640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f929400c430 tx=0x7f929400c900 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8438018070 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f844407efd0 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f844407dba0 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f8438004ae0 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f8438004dc0 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f92940053b0 con 0x7f92a0057710 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f84440578f0 con 0x7f8444058330 2026-03-20T11:45:41.453 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f9294005550 con 0x7f92a0057710 2026-03-20T11:45:41.453 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.455+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f92a01565b0 con 0x7f92a0057710 2026-03-20T11:45:41.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f9294003810 con 0x7f92a0057710 2026-03-20T11:45:41.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f92a0157a60 con 0x7f92a0057710 2026-03-20T11:45:41.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f92940039b0 con 0x7f92a0057710 2026-03-20T11:45:41.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f927f7fe640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 0x7f926c05e040 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.454 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f92a0156c90 con 0x7f92a0057710 2026-03-20T11:45:41.456 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.458+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f9294050670 con 0x7f92a0057710 2026-03-20T11:45:41.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.459+0000 7f929e7fc640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 0x7f926c05e040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.457 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.459+0000 7f929e7fc640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 0x7f926c05e040 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f92880170a0 tx=0x7f9288033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f84380070d0 con 0x7f8444058330 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.456+0000 7f84337fe640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 0x7f841005e0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.457+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 
src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f8438051d80 con 0x7f8444058330 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.457+0000 7f84427fc640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 0x7f841005e0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.457+0000 7f84427fc640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 0x7f841005e0b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f8434009670 tx=0x7f8434033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.458+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f84440578f0 con 0x7f8444058330 2026-03-20T11:45:41.458 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.460+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f92940092a0 con 0x7f92a0057710 2026-03-20T11:45:41.534 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.536+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f8348057ae0 con 0x7f8348057710 2026-03-20T11:45:41.535 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.536+0000 7f83277fe640 1 -- 192.168.123.100:0/772182459 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f83300201f0 con 0x7f8348057710 2026-03-20T11:45:41.535 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 msgr2=0x7f831405e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 0x7f831405e100 secure :-1 s=READY pgs=26 cs=0 l=1 rev1=1 crypto rx=0x7f833c009670 tx=0x7f833c033000 comp rx=0 tx=0).stop 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 msgr2=0x7f834813e750 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 secure :-1 s=READY pgs=85 cs=0 l=1 rev1=1 crypto rx=0x7f833000c450 tx=0x7f833000c920 comp rx=0 tx=0).stop 2026-03-20T11:45:41.537 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 shutdown_connections 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f831403dc50 0x7f831405e100 unknown :-1 s=CLOSED pgs=26 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 --2- 192.168.123.100:0/772182459 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8348057710 0x7f834813e750 unknown :-1 s=CLOSED pgs=85 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.537 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.539+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 >> 192.168.123.100:0/772182459 conn(0x7f8348082bf0 msgr2=0x7f8348074680 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.540+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 shutdown_connections 2026-03-20T11:45:41.538 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.540+0000 7f834d016640 1 -- 192.168.123.100:0/772182459 wait complete. 2026-03-20T11:45:41.547 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.2 2026-03-20T11:45:41.575 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.577+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f92a0057ae0 con 0x7f92a0057710 2026-03-20T11:45:41.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.578+0000 7f927f7fe640 1 -- 192.168.123.100:0/339599148 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f9294010d80 con 0x7f92a0057710 2026-03-20T11:45:41.576 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:41.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f8444058700 con 0x7f8444058330 2026-03-20T11:45:41.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 msgr2=0x7f926c05e040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.579 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f92a53eb640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 0x7f926c05e040 secure :-1 s=READY pgs=28 cs=0 l=1 rev1=1 crypto rx=0x7f92880170a0 tx=0x7f9288033000 comp rx=0 tx=0).stop 2026-03-20T11:45:41.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 msgr2=0x7f92a0155180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f92a53eb640 1 --2- 
192.168.123.100:0/339599148 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 secure :-1 s=READY pgs=91 cs=0 l=1 rev1=1 crypto rx=0x7f929400c430 tx=0x7f929400c900 comp rx=0 tx=0).stop 2026-03-20T11:45:41.580 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.581+0000 7f84337fe640 1 -- 192.168.123.100:0/2013205276 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f8438010300 con 0x7f8444058330 2026-03-20T11:45:41.580 INFO:teuthology.orchestra.run.vm00.stdout:34359738370 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 shutdown_connections 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f926c03db90 0x7f926c05e040 unknown :-1 s=CLOSED pgs=28 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 --2- 192.168.123.100:0/339599148 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f92a0057710 0x7f92a0155180 unknown :-1 s=CLOSED pgs=91 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 >> 192.168.123.100:0/339599148 conn(0x7f92a0082bf0 msgr2=0x7f92a0058ab0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 shutdown_connections 2026-03-20T11:45:41.581 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.583+0000 7f92a53eb640 1 -- 192.168.123.100:0/339599148 wait complete. 
2026-03-20T11:45:41.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 msgr2=0x7f841005e0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 0x7f841005e0b0 secure :-1 s=READY pgs=27 cs=0 l=1 rev1=1 crypto rx=0x7f8434009670 tx=0x7f8434033000 comp rx=0 tx=0).stop 2026-03-20T11:45:41.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 msgr2=0x7f8444157e70 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:41.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 secure :-1 s=READY pgs=90 cs=0 l=1 rev1=1 crypto rx=0x7f843800c680 tx=0x7f843800cb50 comp rx=0 tx=0).stop 2026-03-20T11:45:41.582 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 shutdown_connections 2026-03-20T11:45:41.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f841003dc00 0x7f841005e0b0 unknown :-1 s=CLOSED pgs=27 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 --2- 192.168.123.100:0/2013205276 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f8444058330 0x7f8444157e70 unknown :-1 s=CLOSED pgs=90 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:41.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 >> 192.168.123.100:0/2013205276 conn(0x7f8444083010 msgr2=0x7f8444076040 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:41.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 shutdown_connections 2026-03-20T11:45:41.583 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:41.584+0000 7f8449379640 1 -- 192.168.123.100:0/2013205276 wait complete. 
2026-03-20T11:45:41.592 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.0 2026-03-20T11:45:41.592 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738370 for osd.1 2026-03-20T11:45:42.547 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.2 2026-03-20T11:45:42.592 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.0 2026-03-20T11:45:42.592 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd last-stat-seq osd.1 2026-03-20T11:45:42.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.619+0000 7f6b5a37f640 1 Processor -- start 2026-03-20T11:45:42.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.619+0000 7f6b5a37f640 1 -- start start 2026-03-20T11:45:42.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.619+0000 7f6b5a37f640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f6b5407e960 0x7f6b5407ed30 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.619+0000 7f6b5a37f640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f6b5405a1c0 con 0x7f6b5404c270 2026-03-20T11:45:42.618 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.619+0000 7f6b5a37f640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f6b54059f90 con 0x7f6b5407e960 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.620+0000 7f6b53fff640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f6b5407e960 0x7f6b5407ed30 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.620+0000 7f6b53fff640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f6b5407e960 0x7f6b5407ed30 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36246/0 (socket says 192.168.123.100:36246) 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.620+0000 7f6b53fff640 1 -- 192.168.123.100:0/1456985646 learned_addr learned my addr 192.168.123.100:0/1456985646 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.620+0000 7f6b53fff640 1 -- 192.168.123.100:0/1456985646 >> v1:192.168.123.100:6789/0 conn(0x7f6b5404c270 legacy=0x7f6b54057e20 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.620+0000 7f6b53fff640 1 -- 192.168.123.100:0/1456985646 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b5407f270 con 0x7f6b5407e960 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b53fff640 1 --2- 192.168.123.100:0/1456985646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5407e960 0x7f6b5407ed30 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f6b54058610 tx=0x7f6b4402ed30 comp rx=0 tx=0).ready entity=mon.0 
client_cookie=fdb86556332c64e5 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b52ffd640 1 -- 192.168.123.100:0/1456985646 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b4403c070 con 0x7f6b5407e960 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b52ffd640 1 -- 192.168.123.100:0/1456985646 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f6b4402f8b0 con 0x7f6b5407e960 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b52ffd640 1 -- 192.168.123.100:0/1456985646 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b4402fbb0 con 0x7f6b5407e960 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 -- 192.168.123.100:0/1456985646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5407e960 msgr2=0x7f6b5407ed30 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/1456985646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5407e960 0x7f6b5407ed30 secure :-1 s=READY pgs=93 cs=0 l=1 rev1=1 crypto rx=0x7f6b54058610 tx=0x7f6b4402ed30 comp rx=0 tx=0).stop 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 -- 192.168.123.100:0/1456985646 shutdown_connections 2026-03-20T11:45:42.619 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/1456985646 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5407e960 0x7f6b5407ed30 unknown :-1 s=CLOSED pgs=93 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.620 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 -- 192.168.123.100:0/1456985646 >> 192.168.123.100:0/1456985646 conn(0x7f6b54082bf0 msgr2=0x7f6b54082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.620 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 -- 192.168.123.100:0/1456985646 shutdown_connections 2026-03-20T11:45:42.620 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.621+0000 7f6b5a37f640 1 -- 192.168.123.100:0/1456985646 wait complete. 
2026-03-20T11:45:42.621 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b5a37f640 1 Processor -- start 2026-03-20T11:45:42.621 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b5a37f640 1 -- start start 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b5a37f640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b5a37f640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f6b54057570 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b53fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b53fff640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36248/0 (socket says 192.168.123.100:36248) 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.622+0000 7f6b53fff640 1 -- 192.168.123.100:0/3808990897 learned_addr learned my addr 192.168.123.100:0/3808990897 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b53fff640 1 -- 192.168.123.100:0/3808990897 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f6b54174330 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b53fff640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f6b44004770 tx=0x7f6b440047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b4403c040 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f6b44037aa0 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f6b44037d80 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f6b5411a4a0 con 0x7f6b5404c270 2026-03-20T11:45:42.622 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.623+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f6b5411b1c0 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.624+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f6b44004030 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.624+0000 7f6b50ff9640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 0x7f6b2405e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.624+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f6b440779a0 con 0x7f6b5404c270 2026-03-20T11:45:42.622 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.624+0000 7f6b537fe640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 0x7f6b2405e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.623 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.625+0000 7f6b537fe640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 0x7f6b2405e100 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f6b40002910 tx=0x7f6b40007b20 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.625+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f6b14005180 con 0x7f6b5404c270 2026-03-20T11:45:42.626 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.628+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f6b4403cc50 con 0x7f6b5404c270 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f4450ec9640 1 Processor -- start 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f4450ec9640 1 -- start start 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f4450ec9640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f444c057710 0x7f444c057ae0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f4450ec9640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f444c05bff0 con 0x7f444c058020 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f4450ec9640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f444c05b720 con 0x7f444c057710 2026-03-20T11:45:42.667 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f444a575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f444c057710 0x7f444c057ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f444a575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f444c057710 0x7f444c057ae0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36252/0 (socket says 192.168.123.100:36252) 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f444a575640 1 -- 192.168.123.100:0/3297357302 learned_addr learned my addr 192.168.123.100:0/3297357302 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f444a575640 1 -- 192.168.123.100:0/3297357302 >> v1:192.168.123.100:6789/0 conn(0x7f444c058020 legacy=0x7f444c07e420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.667 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.669+0000 7f444a575640 1 -- 192.168.123.100:0/3297357302 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f444c05bce0 con 0x7f444c057710 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f444a575640 1 --2- 192.168.123.100:0/3297357302 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c057ae0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f443800d590 tx=0x7f44380332f0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=82e8e35283a73f2c server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4449573640 1 -- 192.168.123.100:0/3297357302 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f443800d0d0 con 0x7f444c057710 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4449573640 1 -- 192.168.123.100:0/3297357302 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f443800d270 con 0x7f444c057710 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4449573640 1 -- 192.168.123.100:0/3297357302 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f443800b550 con 0x7f444c057710 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 -- 192.168.123.100:0/3297357302 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 msgr2=0x7f444c057ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 --2- 192.168.123.100:0/3297357302 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c057ae0 secure :-1 s=READY pgs=96 cs=0 l=1 rev1=1 crypto rx=0x7f443800d590 tx=0x7f44380332f0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 -- 192.168.123.100:0/3297357302 shutdown_connections 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 
7f4450ec9640 1 --2- 192.168.123.100:0/3297357302 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c057ae0 unknown :-1 s=CLOSED pgs=96 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 -- 192.168.123.100:0/3297357302 >> 192.168.123.100:0/3297357302 conn(0x7f444c082bf0 msgr2=0x7f444c082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 -- 192.168.123.100:0/3297357302 shutdown_connections 2026-03-20T11:45:42.668 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.670+0000 7f4450ec9640 1 -- 192.168.123.100:0/3297357302 wait complete. 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f4450ec9640 1 Processor -- start 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f4450ec9640 1 -- start start 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f58b8e01640 1 Processor -- start 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f58b8e01640 1 -- start start 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f4450ec9640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f4450ec9640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f444c07eec0 con 0x7f444c057710 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f444a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f444a575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36256/0 (socket says 192.168.123.100:36256) 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f444a575640 1 -- 192.168.123.100:0/1079307273 learned_addr learned my addr 192.168.123.100:0/1079307273 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.671+0000 7f444a575640 1 -- 192.168.123.100:0/1079307273 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f444c1b0aa0 con 0x7f444c057710 2026-03-20T11:45:42.669 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f444a575640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f4438004770 tx=0x7f44380047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.671 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f58b8e01640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f58b4057710 0x7f58b4057ae0 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f58b2575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f58b4057710 0x7f58b4057ae0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f58b2575640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f58b4057710 0x7f58b4057ae0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36270/0 (socket says 192.168.123.100:36270) 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f443804a030 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f443803b9d0 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f443800d0d0 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f444c1ae730 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f444c1af3f0 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.672+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f443800d270 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.673+0000 7f4436ffd640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 0x7f442005e060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.673+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f443804b070 con 0x7f444c057710 2026-03-20T11:45:42.671 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.673+0000 7f4449d74640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 0x7f442005e060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.671 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.673+0000 7f4449d74640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 0x7f442005e060 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f444c058480 tx=0x7f4440007ac0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f440c005180 con 0x7f444c057710 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b8e01640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f58b405bff0 con 0x7f58b4058020 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b8e01640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f58b405b720 con 0x7f58b4057710 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b1d74640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f58b4058020 0x7f58b407e420 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33042/0 (socket says 192.168.123.100:33042) 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b1d74640 1 -- 192.168.123.100:0/384390989 learned_addr learned my addr 192.168.123.100:0/384390989 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b1573640 1 -- 192.168.123.100:0/384390989 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 580587126 0 0) 0x7f58b405bff0 con 0x7f58b4058020 2026-03-20T11:45:42.672 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b1573640 1 -- 192.168.123.100:0/384390989 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f58a0003610 con 0x7f58b4058020 2026-03-20T11:45:42.675 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b2575640 1 -- 192.168.123.100:0/384390989 >> v1:192.168.123.100:6789/0 conn(0x7f58b4058020 legacy=0x7f58b407e420 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.674+0000 7f58b2575640 1 -- 192.168.123.100:0/384390989 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f58b405bce0 con 0x7f58b4057710 2026-03-20T11:45:42.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.675+0000 7f58b2575640 1 --2- 192.168.123.100:0/384390989 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b4057ae0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f58a8004770 tx=0x7f58a802ed50 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d172983daf0ae19b server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.677+0000 7f58b1573640 1 -- 192.168.123.100:0/384390989 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f58a803c070 con 0x7f58b4057710 2026-03-20T11:45:42.676 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.677+0000 7f58b1573640 1 -- 192.168.123.100:0/384390989 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f58a802f940 con 0x7f58b4057710 2026-03-20T11:45:42.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.677+0000 7f58b1573640 1 -- 192.168.123.100:0/384390989 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f58a802fc40 con 0x7f58b4057710 2026-03-20T11:45:42.676 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.677+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f443804d360 con 0x7f444c057710 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- 192.168.123.100:0/384390989 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 msgr2=0x7f58b4057ae0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 --2- 192.168.123.100:0/384390989 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b4057ae0 secure :-1 s=READY pgs=99 cs=0 l=1 rev1=1 crypto rx=0x7f58a8004770 tx=0x7f58a802ed50 comp rx=0 tx=0).stop 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- 192.168.123.100:0/384390989 shutdown_connections 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 --2- 192.168.123.100:0/384390989 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b4057ae0 unknown :-1 s=CLOSED pgs=99 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- 192.168.123.100:0/384390989 >> 192.168.123.100:0/384390989 conn(0x7f58b4082bf0 msgr2=0x7f58b4082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- 192.168.123.100:0/384390989 shutdown_connections 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- 192.168.123.100:0/384390989 wait complete. 
2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 Processor -- start 2026-03-20T11:45:42.680 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.682+0000 7f58b8e01640 1 -- start start 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b8e01640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b8e01640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f58b407eec0 con 0x7f58b4057710 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b2575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36278/0 (socket says 192.168.123.100:36278) 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b2575640 1 -- 192.168.123.100:0/1151393233 learned_addr learned my addr 192.168.123.100:0/1151393233 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b2575640 1 -- 192.168.123.100:0/1151393233 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f58b41b4b00 con 0x7f58b4057710 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b2575640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f58a803a040 tx=0x7f58a80047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f58a8044070 con 0x7f58b4057710 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f58a8037c10 con 0x7f58b4057710 2026-03-20T11:45:42.681 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f58b41b3cc0 con 0x7f58b4057710 2026-03-20T11:45:42.682 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.683+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f58b41c96c0 con 0x7f58b4057710 2026-03-20T11:45:42.682 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.684+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5874005180 con 0x7f58b4057710 2026-03-20T11:45:42.685 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.685+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f58a803c040 con 0x7f58b4057710 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.685+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f58a8051020 con 0x7f58b4057710 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.685+0000 7f589affd640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 0x7f588005e0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.685+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f58a8076d10 con 0x7f58b4057710 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.687+0000 7f58b1d74640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 0x7f588005e0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.688+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f58a80483b0 con 0x7f58b4057710 2026-03-20T11:45:42.686 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.688+0000 7f58b1d74640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 0x7f588005e0b0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f58b4058480 tx=0x7f589c0079e0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.754 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.756+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 2} v 0) -- 0x7f6b14005470 con 0x7f6b5404c270 2026-03-20T11:45:42.755 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.756+0000 7f6b50ff9640 1 -- 192.168.123.100:0/3808990897 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 2}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f6b44076ae0 con 0x7f6b5404c270 2026-03-20T11:45:42.755 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:42.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 msgr2=0x7f6b2405e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 
2026-03-20T11:45:42.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 0x7f6b2405e100 secure :-1 s=READY pgs=29 cs=0 l=1 rev1=1 crypto rx=0x7f6b40002910 tx=0x7f6b40007b20 comp rx=0 tx=0).stop 2026-03-20T11:45:42.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 msgr2=0x7f6b5411aaa0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.757 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 secure :-1 s=READY pgs=94 cs=0 l=1 rev1=1 crypto rx=0x7f6b44004770 tx=0x7f6b440047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 shutdown_connections 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f6b2403dc50 0x7f6b2405e100 unknown :-1 s=CLOSED pgs=29 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 --2- 192.168.123.100:0/3808990897 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f6b5404c270 0x7f6b5411aaa0 unknown :-1 s=CLOSED pgs=94 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.759+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 >> 192.168.123.100:0/3808990897 conn(0x7f6b54082bf0 msgr2=0x7f6b5405b870 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.760+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 shutdown_connections 2026-03-20T11:45:42.758 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.760+0000 7f6b5a37f640 1 -- 192.168.123.100:0/3808990897 wait complete. 
2026-03-20T11:45:42.768 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.2 2026-03-20T11:45:42.768 DEBUG:teuthology.parallel:result is None 2026-03-20T11:45:42.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.799+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 1} v 0) -- 0x7f440c005470 con 0x7f444c057710 2026-03-20T11:45:42.797 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.799+0000 7f4436ffd640 1 -- 192.168.123.100:0/1079307273 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 1}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f44380433a0 con 0x7f444c057710 2026-03-20T11:45:42.797 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 msgr2=0x7f442005e060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 0x7f442005e060 secure :-1 s=READY pgs=30 cs=0 l=1 rev1=1 crypto rx=0x7f444c058480 tx=0x7f4440007ac0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 msgr2=0x7f444c1aecd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 secure :-1 s=READY pgs=97 cs=0 l=1 rev1=1 crypto rx=0x7f4438004770 tx=0x7f44380047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 shutdown_connections 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f442003dbb0 0x7f442005e060 unknown :-1 s=CLOSED pgs=30 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 --2- 192.168.123.100:0/1079307273 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f444c057710 0x7f444c1aecd0 unknown :-1 s=CLOSED pgs=97 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 >> 192.168.123.100:0/1079307273 conn(0x7f444c082bf0 msgr2=0x7f444c059b60 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 shutdown_connections 2026-03-20T11:45:42.800 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.802+0000 7f4450ec9640 1 -- 192.168.123.100:0/1079307273 wait 
complete. 2026-03-20T11:45:42.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.806+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd last-stat-seq", "id": 0} v 0) -- 0x7f5874005470 con 0x7f58b4057710 2026-03-20T11:45:42.804 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.806+0000 7f589affd640 1 -- 192.168.123.100:0/1151393233 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd last-stat-seq", "id": 0}]=0 v0) ==== 74+0+12 (secure 0 0 0) 0x7f58a8033c10 con 0x7f58b4057710 2026-03-20T11:45:42.804 INFO:teuthology.orchestra.run.vm00.stdout:34359738371 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 msgr2=0x7f588005e0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 0x7f588005e0b0 secure :-1 s=READY pgs=31 cs=0 l=1 rev1=1 crypto rx=0x7f58b4058480 tx=0x7f589c0079e0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 msgr2=0x7f58b41c9c50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 secure :-1 s=READY pgs=100 cs=0 l=1 rev1=1 crypto rx=0x7f58a803a040 tx=0x7f58a80047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 shutdown_connections 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f588003dc00 0x7f588005e0b0 unknown :-1 s=CLOSED pgs=31 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.807 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 --2- 192.168.123.100:0/1151393233 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f58b4057710 0x7f58b41c9c50 unknown :-1 s=CLOSED pgs=100 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 >> 192.168.123.100:0/1151393233 conn(0x7f58b4082bf0 msgr2=0x7f58b407a170 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 shutdown_connections 2026-03-20T11:45:42.808 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.809+0000 7f58b8e01640 1 -- 192.168.123.100:0/1151393233 wait complete. 
2026-03-20T11:45:42.810 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.1 2026-03-20T11:45:42.810 DEBUG:teuthology.parallel:result is None 2026-03-20T11:45:42.816 INFO:tasks.ceph.ceph_manager.ceph:need seq 34359738371 got 34359738371 for osd.0 2026-03-20T11:45:42.816 DEBUG:teuthology.parallel:result is None 2026-03-20T11:45:42.816 INFO:tasks.ceph.ceph_manager.ceph:waiting for clean 2026-03-20T11:45:42.816 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump --format=json 2026-03-20T11:45:42.925 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a07f9b640 1 Processor -- start 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a07f9b640 1 -- start start 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a07f9b640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5a00057680 0x7f5a00057a50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a07f9b640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f5a00059d60 con 0x7f5a00057f90 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a07f9b640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f5a00059b30 con 0x7f5a00057680 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a05d10640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5a00057680 0x7f5a00057a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a05d10640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f5a00057680 0x7f5a00057a50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36294/0 (socket says 192.168.123.100:36294) 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a05d10640 1 -- 192.168.123.100:0/3167379763 learned_addr learned my addr 192.168.123.100:0/3167379763 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.927+0000 7f5a05d10640 1 -- 192.168.123.100:0/3167379763 >> v1:192.168.123.100:6789/0 conn(0x7f5a00057f90 legacy=0x7f5a00177580 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.928+0000 7f5a05d10640 1 -- 192.168.123.100:0/3167379763 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5a0005ce60 con 0x7f5a00057680 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.928+0000 7f5a05d10640 1 --2- 192.168.123.100:0/3167379763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a00057a50 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f59f0009080 tx=0x7f59f002ee70 comp rx=0 tx=0).ready entity=mon.0 client_cookie=82fa6f15033e2cf7 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.928+0000 7f5a04d0e640 1 -- 
192.168.123.100:0/3167379763 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f59f003c070 con 0x7f5a00057680 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.928+0000 7f5a04d0e640 1 -- 192.168.123.100:0/3167379763 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f59f002fab0 con 0x7f5a00057680 2026-03-20T11:45:42.926 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.928+0000 7f5a04d0e640 1 -- 192.168.123.100:0/3167379763 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f59f002fdb0 con 0x7f5a00057680 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- 192.168.123.100:0/3167379763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 msgr2=0x7f5a00057a50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/3167379763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a00057a50 secure :-1 s=READY pgs=102 cs=0 l=1 rev1=1 crypto rx=0x7f59f0009080 tx=0x7f59f002ee70 comp rx=0 tx=0).stop 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- 192.168.123.100:0/3167379763 shutdown_connections 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/3167379763 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a00057a50 unknown :-1 s=CLOSED pgs=102 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- 192.168.123.100:0/3167379763 >> 192.168.123.100:0/3167379763 conn(0x7f5a00082bf0 msgr2=0x7f5a00082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- 192.168.123.100:0/3167379763 shutdown_connections 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- 192.168.123.100:0/3167379763 wait complete. 
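
The "need seq ... got ..." lines above come from the harness flushing PG stats: for each OSD it asks the monitor for `osd last-stat-seq` (the bare integer printed on stdout, 34359738371 here) and compares the answer with the sequence number it needs. A minimal sketch of that kind of check, assuming plain `ceph` CLI access and a hypothetical wait_for_stat_seq helper -- this is an illustration, not the teuthology implementation:

    import subprocess
    import time

    def last_stat_seq(osd_id):
        # `ceph osd last-stat-seq <id>` prints the last PG-stats sequence number
        # the monitor has seen for that OSD (the bare integer on stdout above).
        out = subprocess.check_output(["ceph", "osd", "last-stat-seq", str(osd_id)])
        return int(out.decode().strip())

    def wait_for_stat_seq(osd_id, need_seq, interval=1.0):
        # Poll until the OSD's stats up to need_seq have reached the monitor.
        while True:
            got = last_stat_seq(osd_id)
            print("need seq %d got %d for osd.%d" % (need_seq, got, osd_id))
            if got >= need_seq:
                return got
            time.sleep(interval)
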
2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 Processor -- start 2026-03-20T11:45:42.927 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.929+0000 7f5a07f9b640 1 -- start start 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a07f9b640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a07f9b640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f5a00178eb0 con 0x7f5a00057680 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a05d10640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a05d10640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36308/0 (socket says 192.168.123.100:36308) 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a05d10640 1 -- 192.168.123.100:0/576339774 learned_addr learned my addr 192.168.123.100:0/576339774 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a05d10640 1 -- 192.168.123.100:0/576339774 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f5a001278b0 con 0x7f5a00057680 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a05d10640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f59f002eeb0 tx=0x7f59f00047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.928 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f59f003c070 con 0x7f5a00057680 2026-03-20T11:45:42.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f59f003d040 con 0x7f5a00057680 2026-03-20T11:45:42.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.930+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f5a00107d30 con 0x7f5a00057680 2026-03-20T11:45:42.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.931+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f59f0004160 con 0x7f5a00057680 2026-03-20T11:45:42.929 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.931+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f5a001094b0 con 0x7f5a00057680 2026-03-20T11:45:42.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.931+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f59f0004300 con 0x7f5a00057680 2026-03-20T11:45:42.929 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.932+0000 7f59ee7fc640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f59d003db40 0x7f59d005dff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:42.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.932+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f59f007c2b0 con 0x7f5a00057680 2026-03-20T11:45:42.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.932+0000 7f5a0550f640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f59d003db40 0x7f59d005dff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:42.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.932+0000 7f5a0550f640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f59d003db40 0x7f59d005dff0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f59f4002800 tx=0x7f59f4007b20 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:42.930 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.932+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f5a0007ecf0 con 0x7f5a00057680 2026-03-20T11:45:42.932 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:42.934+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f59f00363a0 con 0x7f5a00057680 2026-03-20T11:45:43.048 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.050+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 --> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f5a00057a50 con 0x7f59d003db40 2026-03-20T11:45:43.049 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.051+0000 7f59ee7fc640 1 -- 192.168.123.100:0/576339774 <== mgr.4104 v2:192.168.123.100:6824/1022285047 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+31274 (secure 0 0 0) 0x7f5a00057a50 con 0x7f59d003db40 2026-03-20T11:45:43.049 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:43.049 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T11:45:43.052 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] 
conn(0x7f59d003db40 msgr2=0x7f59d005dff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.052 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f59d003db40 0x7f59d005dff0 secure :-1 s=READY pgs=32 cs=0 l=1 rev1=1 crypto rx=0x7f59f4002800 tx=0x7f59f4007b20 comp rx=0 tx=0).stop 2026-03-20T11:45:43.052 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 msgr2=0x7f5a0007ded0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.052 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 secure :-1 s=READY pgs=103 cs=0 l=1 rev1=1 crypto rx=0x7f59f002eeb0 tx=0x7f59f00047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 shutdown_connections 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f59d003db40 0x7f59d005dff0 unknown :-1 s=CLOSED pgs=32 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 --2- 192.168.123.100:0/576339774 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f5a00057680 0x7f5a0007ded0 unknown :-1 s=CLOSED pgs=103 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.054+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 >> 192.168.123.100:0/576339774 conn(0x7f5a00082bf0 msgr2=0x7f5a00075fd0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.055+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 shutdown_connections 2026-03-20T11:45:43.053 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.055+0000 7f5a07f9b640 1 -- 192.168.123.100:0/576339774 wait complete. 
2026-03-20T11:45:43.064 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":17,"stamp":"2026-03-20T11:45:41.637496+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":125,"num_write_kb":2152,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":14,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":314572800,"kb_used":82120,"kb_used_data":1544,"kb_used_omap":25,"kb_used_meta":80422,"kb_avail":314490680,"statfs":{"total":322122547200,"available":322038456320,"internally_reserved":0,"allocated":1581056,"data_stored":1291575,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":25680,"internal_metadata":82353072},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.571816"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076272+0000","last_change":"2026-03-20T11:45:3
9.076543+0000","last_active":"2026-03-20T11:45:39.076272+0000","last_peered":"2026-03-20T11:45:39.076272+0000","last_clean":"2026-03-20T11:45:39.076272+0000","last_became_active":"2026-03-20T11:45:37.071043+0000","last_became_peered":"2026-03-20T11:45:37.071043+0000","last_unstale":"2026-03-20T11:45:39.076272+0000","last_undegraded":"2026-03-20T11:45:39.076272+0000","last_fullsized":"2026-03-20T11:45:39.076272+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T13:39:05.124127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00035895099999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076306+0000","last_change":"2026-03-20T11:45:39.076635+0000","last_active":"2026-03-20T11:45:39.076306+0000","last_peered":"2026-03-20T11:45:39.076306+0000","last_clean":"2026-03-20T11:45:39.076306+0000","last_became_active":"2026-03-20T11:45:37.071208+0000","last_became_peered":"2026-03-20T11:45:37.071208+0000","last_unstale":"2026-03-20T11:45:39.076306+0000","last_undegraded":"2026-03-20T11:45:39.076306+0000","last_fullsized":"2026-03-20T11:45:39.076306+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_s
chedule":"periodic scrub scheduled @ 2026-03-21T19:14:18.889470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00034513600000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076558+0000","last_change":"2026-03-20T11:45:39.076747+0000","last_active":"2026-03-20T11:45:39.076558+0000","last_peered":"2026-03-20T11:45:39.076558+0000","last_clean":"2026-03-20T11:45:39.076558+0000","last_became_active":"2026-03-20T11:45:37.070876+0000","last_became_peered":"2026-03-20T11:45:37.070876+0000","last_unstale":"2026-03-20T11:45:39.076558+0000","last_undegraded":"2026-03-20T11:45:39.076558+0000","last_fullsized":"2026-03-20T11:45:39.076558+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T13:13:49.705410+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00042563699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076655+0000","last_change":"2026-03-20T11:45:39.076751+0000","last_active":"2026-03-20T11:45:39.076655+0000","last_peered":"2026-03-20T11:45:39.076655+0000","last_clean":"2026-03-20T11:45:39.076655+0000","last_became_active":"2026-03-20T11:45:37.071101+0000","last_became_peered":"2026-03-20T11:45:37.071101+0000","last_unstale":"2026-03-20T11:45:39.076655+0000","last_undegraded":"2026-03-20T11:45:39.076655+0000","last_fullsized":"2026-03-20T11:45:39.076655+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:04:50.026627+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00037377100000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"15'2","reported_seq":22,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.074299+0000","last_change":"2026-03-20T11:45:39.074299+0000","last_active":"2026-03-20T11:45:39.074299+0000","last_peered":"2026-03-20T11:45:39.074299+0000","last_clean":"2026-03-20T11:45:39.074299+0000","last_became_active":"2026-03-20T11:45:37.069967+0000","last_became_peered":"2026-03-20T11:45:37.069967+0000","last_unstale":"2026-03-20T11:45:39.074299+0000","last_undegraded":"2026-03-20T11:45:39.074299+0000","last_fullsized":"2026-03-20T11:45:39.074299+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:31:07.292041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000199383,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1],"acting":[0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.133828+0000","last_change":"2026-03-20T11:45:39.133894+0000","last_active":"2026-03-20T11:45:39.133828+0000","last_peered":"2026-03-20T11:45:39.133828+0000","last_clean":"2026-03-20T11:45:39.133828+0000","last_became_active":"2026-03-20T11:45:37.071626+0000","last_became_peered":"2026-03-20T11:45:37.071626+0000","last_unstale":"2026-03-20T11:45:39.133828+0000","last_undegraded":"2026-03-20T11:45:39.133828+0000","last_fullsized":"2026-03-20T11:45:39.133828+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T15:25:35.757663+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00014619399999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.134005+0000","last_change":"2026-03-20T11:45:39.134071+0000","last_active":"2026-03-20T11:45:39.134005+0000","last_peered":"2026-03-20T11:45:39.134005+0000","last_clean":"2026-03-20T11:45:39.134005+0000","last_became_active":"2026-03-20T11:45:37.071469+0000","last_became_peered":"2026-03-20T11:45:37.071469+0000","last_unstale":"2026-03-20T11:45:39.134005+0000","last_undegraded":"2026-03-20T11:45:39.134005+0000","last_fullsized":"2026-03-20T11:45:39.134005+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:11:43.166059+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00017362500000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"13'1","reported_seq":21,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076781+0000","last_change":"2026-03-20T11:45:39.076880+0000","last_active":"2026-03-20T11:45:39.076781+0000","last_peered":"2026-03-20T11:45:39.076781+0000","last_clean":"2026-03-20T11:45:39.076781+0000","last_became_active":"2026-03-20T11:45:37.072122+0000","last_became_peered":"2026-03-20T11:45:37.072122+0000","last_unstale":"2026-03-20T11:45:39.076781+0000","last_undegraded":"2026-03-20T11:45:39.076781+0000","last_fullsized":"2026-03-20T11:45:39.076781+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T21:24:41.113089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00044625600000000002,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"10'92","reported_seq":134,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076479+0000","last_change":"2026-03-20T11:45:33.054446+0000","last_active":"2026-03-20T11:45:39.076479+0000","last_peered":"2026-03-20T11:45:39.076479+0000","last_clean":"2026-03-20T11:45:39.076479+0000","last_became_active":"2026-03-20T11:45:33.054214+0000","last_became_peered":"2026-03-20T11:45:33.054214+0000","last_unstale":"2026-03-20T11:45:39.076479+0000","last_undegraded":"2026-03-20T11:45:39.076479+0000","last_fullsized":"2026-03-20T11:45:39.076479+0000","mapping_epoch":9,"log_start":"0'0","ondisk_log_start":"0'0","created":9,"last_epoch_clean":10,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:32.044892+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:32.044892+0000","last_clean_scrub_stamp":"2026-03-20T11:45:32.044892+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":0,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T20:16:06.359935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":123,"num_write_kb":2150,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":3},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":123,"num_write_kb":2150,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acting":
2,"num_store_stats":2}],"osd_stats":[{"osd":2,"up_from":8,"seq":34359738371,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26984,"kb_used_data":120,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":104830616,"statfs":{"total":107374182400,"available":107346550784,"internally_reserved":0,"allocated":122880,"data_stored":34563,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8126,"internal_metadata":27451458},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":8,"seq":34359738371,"num_pgs":9,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":712,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":729088,"data_stored":628506,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":712,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":729088,"data_stored":628506,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T11:45:43.065 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph pg dump 
--format=json 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.133+0000 7f0f58d5d640 1 Processor -- start 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.133+0000 7f0f58d5d640 1 -- start start 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f58d5d640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f0f54057680 0x7f0f54057a50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f58d5d640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f0f54059d60 con 0x7f0f54057f90 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f58d5d640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f0f54059b30 con 0x7f0f54057680 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f51d74640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f0f54057f90 0x7f0f54177580 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33056/0 (socket says 192.168.123.100:33056) 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f51d74640 1 -- 192.168.123.100:0/3315792197 learned_addr learned my addr 192.168.123.100:0/3315792197 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f52575640 1 --2- 192.168.123.100:0/3315792197 >> v2:192.168.123.100:3300/0 conn(0x7f0f54057680 0x7f0f54057a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f52575640 1 -- 192.168.123.100:0/3315792197 >> v1:192.168.123.100:6789/0 conn(0x7f0f54057f90 legacy=0x7f0f54177580 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f51573640 1 -- 192.168.123.100:0/3315792197 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3707955060 0 0) 0x7f0f54059d60 con 0x7f0f54057f90 2026-03-20T11:45:43.132 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.134+0000 7f0f52575640 1 -- 192.168.123.100:0/3315792197 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0f5405ce60 con 0x7f0f54057680 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f52575640 1 --2- 192.168.123.100:0/3315792197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f54057a50 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f0f48004770 tx=0x7f0f4802eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=366553a713195e5e server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f51573640 1 -- 192.168.123.100:0/3315792197 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f4803c070 con 0x7f0f54057680 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f51573640 1 -- 192.168.123.100:0/3315792197 <== 
mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0f4802f9e0 con 0x7f0f54057680 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f51573640 1 -- 192.168.123.100:0/3315792197 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f4802fce0 con 0x7f0f54057680 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 -- 192.168.123.100:0/3315792197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 msgr2=0x7f0f54057a50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/3315792197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f54057a50 secure :-1 s=READY pgs=105 cs=0 l=1 rev1=1 crypto rx=0x7f0f48004770 tx=0x7f0f4802eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 -- 192.168.123.100:0/3315792197 shutdown_connections 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/3315792197 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f54057a50 unknown :-1 s=CLOSED pgs=105 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 -- 192.168.123.100:0/3315792197 >> 192.168.123.100:0/3315792197 conn(0x7f0f54082bf0 msgr2=0x7f0f54082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 -- 192.168.123.100:0/3315792197 shutdown_connections 2026-03-20T11:45:43.133 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.135+0000 7f0f58d5d640 1 -- 192.168.123.100:0/3315792197 wait complete. 
2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 Processor -- start 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 -- start start 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f0f54178eb0 con 0x7f0f54057680 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f52575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f52575640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36324/0 (socket says 192.168.123.100:36324) 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f52575640 1 -- 192.168.123.100:0/1648167030 learned_addr learned my addr 192.168.123.100:0/1648167030 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f52575640 1 -- 192.168.123.100:0/1648167030 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f0f54108ae0 con 0x7f0f54057680 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f52575640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f0f48009880 tx=0x7f0f480047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f48046070 con 0x7f0f54057680 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f0f48037ce0 con 0x7f0f54057680 2026-03-20T11:45:43.134 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f0f4803c040 con 0x7f0f54057680 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f0f541087b0 con 0x7f0f54057680 2026-03-20T11:45:43.135 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.136+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f0f54109030 con 0x7f0f54057680 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.137+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f0f48053020 con 0x7f0f54057680 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.137+0000 7f0f3affd640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f2003dc50 0x7f0f2005e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.137+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f0f48077590 con 0x7f0f54057680 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.137+0000 7f0f51d74640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f2003dc50 0x7f0f2005e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.135 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.137+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f0f54057a50 con 0x7f0f54057680 2026-03-20T11:45:43.136 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.138+0000 7f0f51d74640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f2003dc50 0x7f0f2005e100 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f0f3c0029d0 tx=0x7f0f3c0079e0 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.138 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.140+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f0f48049360 con 0x7f0f54057680 2026-03-20T11:45:43.251 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.253+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 --> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] -- mgr_command(tid 0: {"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}) -- 0x7f0f5405e930 con 0x7f0f2003dc50 2026-03-20T11:45:43.252 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.254+0000 7f0f3affd640 1 -- 192.168.123.100:0/1648167030 <== mgr.4104 v2:192.168.123.100:6824/1022285047 1 ==== mgr_command_reply(tid 0: 0 dumped all) ==== 18+0+31274 (secure 0 0 0) 0x7f0f5405e930 con 0x7f0f2003dc50 2026-03-20T11:45:43.252 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:43.252 INFO:teuthology.orchestra.run.vm00.stderr:dumped all 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] 
conn(0x7f0f2003dc50 msgr2=0x7f0f2005e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f2003dc50 0x7f0f2005e100 secure :-1 s=READY pgs=33 cs=0 l=1 rev1=1 crypto rx=0x7f0f3c0029d0 tx=0x7f0f3c0079e0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 msgr2=0x7f0f5407dc90 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 secure :-1 s=READY pgs=106 cs=0 l=1 rev1=1 crypto rx=0x7f0f48009880 tx=0x7f0f480047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 shutdown_connections 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f0f2003dc50 0x7f0f2005e100 unknown :-1 s=CLOSED pgs=33 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.255 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 --2- 192.168.123.100:0/1648167030 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f0f54057680 0x7f0f5407dc90 unknown :-1 s=CLOSED pgs=106 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 >> 192.168.123.100:0/1648167030 conn(0x7f0f54082bf0 msgr2=0x7f0f54075ea0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 shutdown_connections 2026-03-20T11:45:43.256 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.257+0000 7f0f58d5d640 1 -- 192.168.123.100:0/1648167030 wait complete. 
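The stderr above shows the ceph CLI resolving "pg dump" into an mgr command (mgr_command tid 0 with prefix "pg dump", format "json") and the mgr answering "dumped all"; the JSON payload itself arrives on stdout directly below. As an illustrative sketch only (not part of the teuthology run), capturing and parsing that same dump from Python could look roughly like this, assuming a reachable cluster and the ceph CLI on PATH:

# Sketch: run `ceph pg dump --format=json` and parse the JSON from stdout.
import json
import subprocess

def pg_dump(cluster="ceph", timeout=120):
    out = subprocess.run(
        ["ceph", "--cluster", cluster, "pg", "dump", "--format=json"],
        capture_output=True, text=True, timeout=timeout, check=True,
    ).stdout
    return json.loads(out)

dump = pg_dump()
print(dump["pg_ready"], dump["pg_map"]["version"], len(dump["pg_map"]["pg_stats"]), "PGs")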
2026-03-20T11:45:43.265 INFO:teuthology.orchestra.run.vm00.stdout:{"pg_ready":true,"pg_map":{"version":17,"stamp":"2026-03-20T11:45:41.637496+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":590387,"num_objects":4,"num_object_clones":0,"num_object_copies":8,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":4,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":125,"num_write_kb":2152,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":95,"ondisk_log_size":95,"up":18,"acting":18,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":14,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":314572800,"kb_used":82120,"kb_used_data":1544,"kb_used_omap":25,"kb_used_meta":80422,"kb_avail":314490680,"statfs":{"total":322122547200,"available":322038456320,"internally_reserved":0,"allocated":1581056,"data_stored":1291575,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":25680,"internal_metadata":82353072},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"4.571816"},"pg_stats":[{"pgid":"2.7","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076272+0000","last_change":"2026-03-20T11:45:3
9.076543+0000","last_active":"2026-03-20T11:45:39.076272+0000","last_peered":"2026-03-20T11:45:39.076272+0000","last_clean":"2026-03-20T11:45:39.076272+0000","last_became_active":"2026-03-20T11:45:37.071043+0000","last_became_peered":"2026-03-20T11:45:37.071043+0000","last_unstale":"2026-03-20T11:45:39.076272+0000","last_undegraded":"2026-03-20T11:45:39.076272+0000","last_fullsized":"2026-03-20T11:45:39.076272+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-21T13:39:05.124127+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00035895099999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.6","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076306+0000","last_change":"2026-03-20T11:45:39.076635+0000","last_active":"2026-03-20T11:45:39.076306+0000","last_peered":"2026-03-20T11:45:39.076306+0000","last_clean":"2026-03-20T11:45:39.076306+0000","last_became_active":"2026-03-20T11:45:37.071208+0000","last_became_peered":"2026-03-20T11:45:37.071208+0000","last_unstale":"2026-03-20T11:45:39.076306+0000","last_undegraded":"2026-03-20T11:45:39.076306+0000","last_fullsized":"2026-03-20T11:45:39.076306+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_s
chedule":"periodic scrub scheduled @ 2026-03-21T19:14:18.889470+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00034513600000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.5","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076558+0000","last_change":"2026-03-20T11:45:39.076747+0000","last_active":"2026-03-20T11:45:39.076558+0000","last_peered":"2026-03-20T11:45:39.076558+0000","last_clean":"2026-03-20T11:45:39.076558+0000","last_became_active":"2026-03-20T11:45:37.070876+0000","last_became_peered":"2026-03-20T11:45:37.070876+0000","last_unstale":"2026-03-20T11:45:39.076558+0000","last_undegraded":"2026-03-20T11:45:39.076558+0000","last_fullsized":"2026-03-20T11:45:39.076558+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T13:13:49.705410+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00042563699999999998,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.4","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076655+0000","last_change":"2026-03-20T11:45:39.076751+0000","last_active":"2026-03-20T11:45:39.076655+0000","last_peered":"2026-03-20T11:45:39.076655+0000","last_clean":"2026-03-20T11:45:39.076655+0000","last_became_active":"2026-03-20T11:45:37.071101+0000","last_became_peered":"2026-03-20T11:45:37.071101+0000","last_unstale":"2026-03-20T11:45:39.076655+0000","last_undegraded":"2026-03-20T11:45:39.076655+0000","last_fullsized":"2026-03-20T11:45:39.076655+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:04:50.026627+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00037377100000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.2","version":"15'2","reported_seq":22,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.074299+0000","last_change":"2026-03-20T11:45:39.074299+0000","last_active":"2026-03-20T11:45:39.074299+0000","last_peered":"2026-03-20T11:45:39.074299+0000","last_clean":"2026-03-20T11:45:39.074299+0000","last_became_active":"2026-03-20T11:45:37.069967+0000","last_became_peered":"2026-03-20T11:45:37.069967+0000","last_unstale":"2026-03-20T11:45:39.074299+0000","last_undegraded":"2026-03-20T11:45:39.074299+0000","last_fullsized":"2026-03-20T11:45:39.074299+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":2,"log_dups_size":0,"ondisk_log_size":2,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T23:31:07.292041+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.000199383,"stat_sum":{"num_bytes":19,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[0,1],"acting":[0,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":0,"acting_primary":0,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.1","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.133828+0000","last_change":"2026-03-20T11:45:39.133894+0000","last_active":"2026-03-20T11:45:39.133828+0000","last_peered":"2026-03-20T11:45:39.133828+0000","last_clean":"2026-03-20T11:45:39.133828+0000","last_became_active":"2026-03-20T11:45:37.071626+0000","last_became_peered":"2026-03-20T11:45:37.071626+0000","last_unstale":"2026-03-20T11:45:39.133828+0000","last_undegraded":"2026-03-20T11:45:39.133828+0000","last_fullsized":"2026-03-20T11:45:39.133828+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T15:25:35.757663+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00014619399999999999,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.0","version":"0'0","reported_seq":20,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.134005+0000","last_change":"2026-03-20T11:45:39.134071+0000","last_active":"2026-03-20T11:45:39.134005+0000","last_peered":"2026-03-20T11:45:39.134005+0000","last_clean":"2026-03-20T11:45:39.134005+0000","last_became_active":"2026-03-20T11:45:37.071469+0000","last_became_peered":"2026-03-20T11:45:37.071469+0000","last_unstale":"2026-03-20T11:45:39.134005+0000","last_undegraded":"2026-03-20T11:45:39.134005+0000","last_fullsized":"2026-03-20T11:45:39.134005+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":0,"log_dups_size":0,"ondisk_log_size":0,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T22:11:43.166059+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00017362500000000001,"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[2,1],"acting":[2,1],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":2,"acting_primary":2,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"2.3","version":"13'1","reported_seq":21,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076781+0000","last_change":"2026-03-20T11:45:39.076880+0000","last_active":"2026-03-20T11:45:39.076781+0000","last_peered":"2026-03-20T11:45:39.076781+0000","last_clean":"2026-03-20T11:45:39.076781+0000","last_became_active":"2026-03-20T11:45:37.072122+0000","last_became_peered":"2026-03-20T11:45:37.072122+0000","last_unstale":"2026-03-20T11:45:39.076781+0000","last_undegraded":"2026-03-20T11:45:39.076781+0000","last_fullsized":"2026-03-20T11:45:39.076781+0000","mapping_epoch":12,"log_start":"0'0","ondisk_log_start":"0'0","created":12,"last_epoch_clean":13,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:36.059830+0000","last_clean_scrub_stamp":"2026-03-20T11:45:36.059830+0000","objects_scrubbed":0,"log_size":1,"log_dups_size":0,"ondisk_log_size":1,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T21:24:41.113089+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0.00044625600000000002,"stat_sum":{"num_bytes":0,"num_objects":1,"num_object_clones":0,"num_object_copies":2,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":1,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2],"acting":[1,2],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[{"start":"2","length":"1"}]},{"pgid":"1.0","version":"10'92","reported_seq":134,"reported_epoch":15,"state":"active+clean","last_fresh":"2026-03-20T11:45:39.076479+0000","last_change":"2026-03-20T11:45:33.054446+0000","last_active":"2026-03-20T11:45:39.076479+0000","last_peered":"2026-03-20T11:45:39.076479+0000","last_clean":"2026-03-20T11:45:39.076479+0000","last_became_active":"2026-03-20T11:45:33.054214+0000","last_became_peered":"2026-03-20T11:45:33.054214+0000","last_unstale":"2026-03-20T11:45:39.076479+0000","last_undegraded":"2026-03-20T11:45:39.076479+0000","last_fullsized":"2026-03-20T11:45:39.076479+0000","mapping_epoch":9,"log_start":"0'0","ondisk_log_start":"0'0","created":9,"last_epoch_clean":10,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-20T11:45:32.044892+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-20T11:45:32.044892+0000","last_clean_scrub_stamp":"2026-03-20T11:45:32.044892+0000","objects_scrubbed":0,"log_size":92,"log_dups_size":0,"ondisk_log_size":92,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-21T20:16:06.359935+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":123,"num_write_kb":2150,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,0],"acting":[1,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":2,"num_pg":8,"stat_sum":{"num_bytes":19,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":2,"num_write_kb":2,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":8192,"data_stored":38,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":3,"ondisk_log_size":3,"up":16,"acting":16,"num_store_stats":3},{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":590368,"num_objects":2,"num_object_clones":0,"num_object_copies":4,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":76,"num_read_kb":64,"num_write":123,"num_write_kb":2150,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1187840,"data_stored":1180736,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":92,"ondisk_log_size":92,"up":2,"acting":
2,"num_store_stats":2}],"osd_stats":[{"osd":2,"up_from":8,"seq":34359738371,"num_pgs":3,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":26984,"kb_used_data":120,"kb_used_omap":7,"kb_used_meta":26808,"kb_avail":104830616,"statfs":{"total":107374182400,"available":107346550784,"internally_reserved":0,"allocated":122880,"data_stored":34563,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8126,"internal_metadata":27451458},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":8,"seq":34359738371,"num_pgs":9,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":712,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":729088,"data_stored":628506,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738371,"num_pgs":2,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":104857600,"kb_used":27568,"kb_used_data":712,"kb_used_omap":8,"kb_used_meta":26807,"kb_avail":104830032,"statfs":{"total":107374182400,"available":107345952768,"internally_reserved":0,"allocated":729088,"data_stored":628506,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":8777,"internal_metadata":27450807},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":1,"apply_latency_ms":1,"commit_latency_ns":1000000,"apply_latency_ns":1000000},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":593920,"data_stored":590368,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":4096,"data_stored":19,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":2,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-20T11:45:43.265 INFO:tasks.ceph.ceph_manager.ceph:clean! 2026-03-20T11:45:43.265 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-20T11:45:43.266 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy 2026-03-20T11:45:43.266 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph health --format=json 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.335+0000 7fd64d906640 1 Processor -- start 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.335+0000 7fd64d906640 1 -- start start 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd64d906640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fd648151da0 0x7fd648172180 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd64d906640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fd64805bff0 con 0x7fd648057710 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd64d906640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fd64805b720 con 0x7fd648151da0 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd6467fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fd648151da0 0x7fd648172180 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd6467fc640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fd648151da0 0x7fd648172180 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36328/0 (socket says 192.168.123.100:36328) 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd6467fc640 1 -- 192.168.123.100:0/889771706 learned_addr learned my addr 192.168.123.100:0/889771706 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd646ffd640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7fd648057710 0x7fd648057ae0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33068/0 (socket says 192.168.123.100:33068) 2026-03-20T11:45:43.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd645ffb640 1 -- 192.168.123.100:0/889771706 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 1072090134 0 0) 0x7fd64805bff0 con 0x7fd648057710 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd645ffb640 1 -- 192.168.123.100:0/889771706 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7fd634003610 con 0x7fd648057710 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd6467fc640 1 -- 192.168.123.100:0/889771706 >> v1:192.168.123.100:6789/0 conn(0x7fd648057710 legacy=0x7fd648057ae0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.336+0000 7fd6467fc640 1 -- 192.168.123.100:0/889771706 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd64805bce0 con 
0x7fd648151da0 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd6467fc640 1 --2- 192.168.123.100:0/889771706 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648151da0 0x7fd648172180 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fd63c002b60 tx=0x7fd63002ecc0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=d985cc13581e67c0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd645ffb640 1 -- 192.168.123.100:0/889771706 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd63c002d80 con 0x7fd648151da0 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd645ffb640 1 -- 192.168.123.100:0/889771706 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fd630030850 con 0x7fd648151da0 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd645ffb640 1 -- 192.168.123.100:0/889771706 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd630030b50 con 0x7fd648151da0 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 -- 192.168.123.100:0/889771706 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648151da0 msgr2=0x7fd648172180 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 --2- 192.168.123.100:0/889771706 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648151da0 0x7fd648172180 secure :-1 s=READY pgs=108 cs=0 l=1 rev1=1 crypto rx=0x7fd63c002b60 tx=0x7fd63002ecc0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 -- 192.168.123.100:0/889771706 shutdown_connections 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 --2- 192.168.123.100:0/889771706 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648151da0 0x7fd648172180 unknown :-1 s=CLOSED pgs=108 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 -- 192.168.123.100:0/889771706 >> 192.168.123.100:0/889771706 conn(0x7fd648082bf0 msgr2=0x7fd648082ff0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 -- 192.168.123.100:0/889771706 shutdown_connections 2026-03-20T11:45:43.335 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.337+0000 7fd64d906640 1 -- 192.168.123.100:0/889771706 wait complete. 
2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd64d906640 1 Processor -- start 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd64d906640 1 -- start start 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd64d906640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd64d906640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fd648172840 con 0x7fd648057710 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd646ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd646ffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36340/0 (socket says 192.168.123.100:36340) 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd646ffd640 1 -- 192.168.123.100:0/554210882 learned_addr learned my addr 192.168.123.100:0/554210882 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd646ffd640 1 -- 192.168.123.100:0/554210882 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fd6481724c0 con 0x7fd648057710 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.338+0000 7fd646ffd640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fd63c00c9f0 tx=0x7fd63c00cec0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.339+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd63c018070 con 0x7fd648057710 2026-03-20T11:45:43.336 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.339+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fd63c004c10 con 0x7fd648057710 2026-03-20T11:45:43.337 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.339+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fd63c0070d0 con 0x7fd648057710 2026-03-20T11:45:43.337 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.339+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fd64805a690 con 0x7fd648057710 2026-03-20T11:45:43.337 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.339+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fd648058c90 con 0x7fd648057710 2026-03-20T11:45:43.338 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.340+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fd648157560 con 0x7fd648057710 2026-03-20T11:45:43.338 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.340+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fd63c007270 con 0x7fd648057710 2026-03-20T11:45:43.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.343+0000 7fd61ffff640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fd61803dbb0 0x7fd61805e060 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.343+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7fd63c051ab0 con 0x7fd648057710 2026-03-20T11:45:43.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.343+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7fd63c00ab30 con 0x7fd648057710 2026-03-20T11:45:43.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.343+0000 7fd6467fc640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fd61803dbb0 0x7fd61805e060 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.341 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.343+0000 7fd6467fc640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fd61803dbb0 0x7fd61805e060 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd630004770 tx=0x7fd630008030 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.488+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "health", "format": "json"} v 0) -- 0x7fd648057ae0 con 0x7fd648057710 2026-03-20T11:45:43.486 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.488+0000 7fd61ffff640 1 -- 192.168.123.100:0/554210882 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "health", "format": "json"}]=0 v0) ==== 72+0+46 (secure 0 0 0) 0x7fd63c0052a0 con 0x7fd648057710 2026-03-20T11:45:43.486 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-20T11:45:43.486 INFO:teuthology.orchestra.run.vm00.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] 
conn(0x7fd61803dbb0 msgr2=0x7fd61805e060 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fd61803dbb0 0x7fd61805e060 secure :-1 s=READY pgs=34 cs=0 l=1 rev1=1 crypto rx=0x7fd630004770 tx=0x7fd630008030 comp rx=0 tx=0).stop 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 msgr2=0x7fd648171f80 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 secure :-1 s=READY pgs=109 cs=0 l=1 rev1=1 crypto rx=0x7fd63c00c9f0 tx=0x7fd63c00cec0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 shutdown_connections 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fd61803dbb0 0x7fd61805e060 unknown :-1 s=CLOSED pgs=34 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 --2- 192.168.123.100:0/554210882 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fd648057710 0x7fd648171f80 unknown :-1 s=CLOSED pgs=109 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 >> 192.168.123.100:0/554210882 conn(0x7fd648082bf0 msgr2=0x7fd648077fb0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 shutdown_connections 2026-03-20T11:45:43.489 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.491+0000 7fd64d906640 1 -- 192.168.123.100:0/554210882 wait complete. 2026-03-20T11:45:43.498 INFO:tasks.ceph.ceph_manager.ceph:wait_until_healthy done 2026-03-20T11:45:43.498 INFO:teuthology.run_tasks:Running task rgw... 
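At this point the cluster reports HEALTH_OK and the rgw task takes over. Its first step, visible in the lines that follow, is to create the default.rgw.buckets data and index pools and tag them with the rgw application before any radosgw process is started. A minimal standalone sketch of that preparation loop is shown below; it is not the task's own code, it simply shells out to the same ceph CLI commands that appear in this log, with the pool names, PG count, and cluster name taken from this run.

    import subprocess

    # Minimal sketch (not the teuthology task itself): create the RGW bucket
    # pools and tag them with the "rgw" application, mirroring the commands
    # that appear in the log below. Values are taken from this run.
    CLUSTER = "ceph"
    POOLS = ["default.rgw.buckets.data", "default.rgw.buckets.index"]
    PG_NUM = 64

    def ceph(*args):
        # Runs "sudo ceph ... --cluster ceph" and fails loudly on a non-zero exit.
        subprocess.run(["sudo", "ceph", *args, "--cluster", CLUSTER], check=True)

    for pool in POOLS:
        ceph("osd", "pool", "create", pool, str(PG_NUM), str(PG_NUM))
        ceph("osd", "pool", "application", "enable", pool, "rgw")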
2026-03-20T11:45:43.502 DEBUG:tasks.rgw:config is {'client.0': {'dns-name': ''}} 2026-03-20T11:45:43.502 DEBUG:tasks.rgw:client list is dict_keys(['client.0']) 2026-03-20T11:45:43.502 INFO:tasks.rgw:Creating data pools 2026-03-20T11:45:43.502 DEBUG:tasks.rgw:Obtaining remote for client client.0 2026-03-20T11:45:43.502 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.data 64 64 --cluster ceph 2026-03-20T11:45:43.568 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.570+0000 7f248087c640 1 Processor -- start 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.570+0000 7f248087c640 1 -- start start 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f248087c640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f2478137550 0x7f2478130a40 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f248087c640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f247805aa90 con 0x7f2478137180 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f248087c640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f2478059870 con 0x7f2478137550 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247e5f1640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7f2478137180 0x7f2478130330 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33070/0 (socket says 192.168.123.100:33070) 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247e5f1640 1 -- 192.168.123.100:0/1983881593 learned_addr learned my addr 192.168.123.100:0/1983881593 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247ddf0640 1 --2- 192.168.123.100:0/1983881593 >> v2:192.168.123.100:3300/0 conn(0x7f2478137550 0x7f2478130a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3928547019 0 0) 0x7f247805aa90 con 0x7f2478137180 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 --> v1:192.168.123.100:6789/0 -- auth(proto 2 36 bytes epoch 0) -- 0x7f2468003610 con 0x7f2478137180 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v1:192.168.123.100:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) ==== 214+0+0 (unknown 1317137319 0 0) 0x7f2468003610 con 0x7f2478137180 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 >> v2:192.168.123.100:3300/0 conn(0x7f2478137550 msgr2=0x7f2478130a40 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 --2- 
192.168.123.100:0/1983881593 >> v2:192.168.123.100:3300/0 conn(0x7f2478137550 0x7f2478130a40 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.569 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 --> v1:192.168.123.100:6789/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2478137970 con 0x7f2478137180 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v1:192.168.123.100:6789/0 3 ==== mon_map magic: 0 ==== 205+0+0 (unknown 2760865362 0 0) 0x7f2470002d80 con 0x7f2478137180 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 >> v1:192.168.123.100:6789/0 conn(0x7f2478137180 legacy=0x7f2478130330 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 --2- 192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2468003f50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.571+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2478059870 con 0x7f2468003b60 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247e5f1640 1 --2- 192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2468003f50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247e5f1640 1 -- 192.168.123.100:0/1983881593 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2478137970 con 0x7f2468003b60 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247e5f1640 1 --2- 192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2468003f50 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f2470002800 tx=0x7f247002f6b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=702fd09ce5945fb4 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2470007d90 con 0x7f2468003b60 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f2470005af0 con 0x7f2468003b60 2026-03-20T11:45:43.570 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.572+0000 7f247d5ef640 1 -- 192.168.123.100:0/1983881593 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2470005e10 con 0x7f2468003b60 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 -- 
192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 msgr2=0x7f2468003f50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 --2- 192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2468003f50 secure :-1 s=READY pgs=111 cs=0 l=1 rev1=1 crypto rx=0x7f2470002800 tx=0x7f247002f6b0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 -- 192.168.123.100:0/1983881593 shutdown_connections 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 --2- 192.168.123.100:0/1983881593 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2468003f50 unknown :-1 s=CLOSED pgs=111 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 --2- 192.168.123.100:0/1983881593 >> v2:192.168.123.100:3300/0 conn(0x7f2478137550 0x7f2478130a40 unknown :-1 s=CLOSED pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 -- 192.168.123.100:0/1983881593 >> 192.168.123.100:0/1983881593 conn(0x7f2478082850 msgr2=0x7f2478082c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 -- 192.168.123.100:0/1983881593 shutdown_connections 2026-03-20T11:45:43.571 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.573+0000 7f248087c640 1 -- 192.168.123.100:0/1983881593 wait complete. 
2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f248087c640 1 Processor -- start 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f248087c640 1 -- start start 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f248087c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f248087c640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f2478130f80 con 0x7f2468003b60 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f247e5f1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f247e5f1640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36364/0 (socket says 192.168.123.100:36364) 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f247e5f1640 1 -- 192.168.123.100:0/3264079491 learned_addr learned my addr 192.168.123.100:0/3264079491 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:43.572 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.574+0000 7f247e5f1640 1 -- 192.168.123.100:0/3264079491 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f2478079bc0 con 0x7f2468003b60 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f247e5f1640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f247002f6f0 tx=0x7f24700030b0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f2470045020 con 0x7f2468003b60 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f2470002c10 con 0x7f2468003b60 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f247003c040 con 0x7f2468003b60 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f247807a440 con 0x7f2468003b60 2026-03-20T11:45:43.573 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f2478078310 con 0x7f2468003b60 2026-03-20T11:45:43.573 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.575+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f2470052020 con 0x7f2468003b60 2026-03-20T11:45:43.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.576+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f2478079f60 con 0x7f2468003b60 2026-03-20T11:45:43.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.576+0000 7f2462ffd640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 0x7f244805e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:43.574 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.576+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(15..15 src has 1..15) ==== 2987+0+0 (secure 0 0 0) 0x7f2470076ed0 con 0x7f2468003b60 2026-03-20T11:45:43.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.578+0000 7f247ddf0640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 0x7f244805e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:43.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.578+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f2470050880 con 0x7f2468003b60 2026-03-20T11:45:43.576 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.578+0000 7f247ddf0640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 0x7f244805e100 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f2464004770 tx=0x7f2464006f90 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:43.689 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:43.691+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "default.rgw.buckets.data", "pg_num": 64, "pgp_num": 64} v 0) -- 0x7f2478130330 con 0x7f2468003b60 2026-03-20T11:45:44.085 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.086+0000 7f2462ffd640 1 -- 192.168.123.100:0/3264079491 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "default.rgw.buckets.data", "pg_num": 64, "pgp_num": 64}]=0 pool 'default.rgw.buckets.data' created v16) ==== 167+0+0 (secure 0 0 0) 0x7f2470078070 con 0x7f2468003b60 2026-03-20T11:45:44.085 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.data' created 2026-03-20T11:45:44.097 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.097+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 
>> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 msgr2=0x7f244805e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:44.097 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.097+0000 7f248087c640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 0x7f244805e100 secure :-1 s=READY pgs=35 cs=0 l=1 rev1=1 crypto rx=0x7f2464004770 tx=0x7f2464006f90 comp rx=0 tx=0).stop 2026-03-20T11:45:44.097 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.097+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 msgr2=0x7f2478077dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:44.097 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.097+0000 7f248087c640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 secure :-1 s=READY pgs=112 cs=0 l=1 rev1=1 crypto rx=0x7f247002f6f0 tx=0x7f24700030b0 comp rx=0 tx=0).stop 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.099+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 shutdown_connections 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.099+0000 7f248087c640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f244803dc50 0x7f244805e100 unknown :-1 s=CLOSED pgs=35 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.099+0000 7f248087c640 1 --2- 192.168.123.100:0/3264079491 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f2468003b60 0x7f2478077dd0 unknown :-1 s=CLOSED pgs=112 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.099+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 >> 192.168.123.100:0/3264079491 conn(0x7f2478082850 msgr2=0x7f247805c640 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.100+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 shutdown_connections 2026-03-20T11:45:44.098 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.100+0000 7f248087c640 1 -- 192.168.123.100:0/3264079491 wait complete. 
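The stderr above shows how the CLI invocation is translated on the wire: "ceph osd pool create" becomes a monitor command with prefix "osd pool create" and explicit pg_num/pgp_num fields, acknowledged with "pool 'default.rgw.buckets.data' created". The same request can be issued directly through the python-rados bindings; the sketch below assumes python3-rados is installed and that /etc/ceph/ceph.conf plus an admin keyring are readable on the node.

    import json
    import rados

    # Sketch only: send the same "osd pool create" monitor command the CLI
    # sends, via python-rados (assumes a readable ceph.conf and admin keyring).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    cmd = json.dumps({
        "prefix": "osd pool create",
        "pool": "default.rgw.buckets.data",
        "pg_num": 64,
        "pgp_num": 64,
    })
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)  # expect 0 and "pool '...' created" (or "already exists")
    cluster.shutdown()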
2026-03-20T11:45:44.108 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.data rgw --cluster ceph 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc83d042640 1 Processor -- start 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc83d042640 1 -- start start 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc83d042640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7fc83805bfc0 0x7fc83805c390 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc83d042640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7fc838059010 con 0x7fc83805c8d0 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc83d042640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7fc838058430 con 0x7fc83805bfc0 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc836575640 1 --1- >> v1:192.168.123.100:6789/0 conn(0x7fc83805c8d0 0x7fc8381709d0 :-1 s=CONNECTING_WAIT_BANNER_AND_IDENTIFY pgs=0 cs=0 l=0).handle_server_banner_and_identify peer v1:192.168.123.100:6789/0 says I am v1:192.168.123.100:33078/0 (socket says 192.168.123.100:33078) 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc836575640 1 -- 192.168.123.100:0/1726416461 learned_addr learned my addr 192.168.123.100:0/1726416461 (peer_addr_for_me v1:192.168.123.100:0/0) 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc836d76640 1 --2- 192.168.123.100:0/1726416461 >> v2:192.168.123.100:3300/0 conn(0x7fc83805bfc0 0x7fc83805c390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.179+0000 7fc835d74640 1 -- 192.168.123.100:0/1726416461 <== mon.0 v1:192.168.123.100:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) ==== 33+0+0 (unknown 3981540195 0 0) 0x7fc838059010 con 0x7fc83805c8d0 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc836d76640 1 -- 192.168.123.100:0/1726416461 >> v1:192.168.123.100:6789/0 conn(0x7fc83805c8d0 legacy=0x7fc8381709d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:44.177 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc836d76640 1 -- 192.168.123.100:0/1726416461 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc838059870 con 0x7fc83805bfc0 2026-03-20T11:45:44.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc836d76640 1 --2- 192.168.123.100:0/1726416461 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83805c390 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fc82c004770 tx=0x7fc82c02eda0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2b7239128239a6b6 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:44.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc835d74640 1 -- 192.168.123.100:0/1726416461 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc82c03c070 con 0x7fc83805bfc0 
2026-03-20T11:45:44.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc835d74640 1 -- 192.168.123.100:0/1726416461 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc82c02f9e0 con 0x7fc83805bfc0 2026-03-20T11:45:44.178 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc835d74640 1 -- 192.168.123.100:0/1726416461 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc82c02fce0 con 0x7fc83805bfc0 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc83d042640 1 -- 192.168.123.100:0/1726416461 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 msgr2=0x7fc83805c390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.180+0000 7fc83d042640 1 --2- 192.168.123.100:0/1726416461 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83805c390 secure :-1 s=READY pgs=114 cs=0 l=1 rev1=1 crypto rx=0x7fc82c004770 tx=0x7fc82c02eda0 comp rx=0 tx=0).stop 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 -- 192.168.123.100:0/1726416461 shutdown_connections 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 --2- 192.168.123.100:0/1726416461 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83805c390 unknown :-1 s=CLOSED pgs=114 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 -- 192.168.123.100:0/1726416461 >> 192.168.123.100:0/1726416461 conn(0x7fc838082850 msgr2=0x7fc838082c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 -- 192.168.123.100:0/1726416461 shutdown_connections 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 -- 192.168.123.100:0/1726416461 wait complete. 
2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 Processor -- start 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.181+0000 7fc83d042640 1 -- start start 2026-03-20T11:45:44.179 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc83d042640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc83d042640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7fc8381720f0 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc836d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc836d76640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36384/0 (socket says 192.168.123.100:36384) 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc836d76640 1 -- 192.168.123.100:0/2125789888 learned_addr learned my addr 192.168.123.100:0/2125789888 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc836d76640 1 -- 192.168.123.100:0/2125789888 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7fc838116890 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc836d76640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fc82c009880 tx=0x7fc82c0047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc82c046070 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fc82c037ce0 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7fc83810d340 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fc82c03c040 con 0x7fc83805bfc0 2026-03-20T11:45:44.180 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.182+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7fc83810b280 con 0x7fc83805bfc0 2026-03-20T11:45:44.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.183+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fc82c053020 con 0x7fc83805bfc0 2026-03-20T11:45:44.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.183+0000 7fc8177fe640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 0x7fc80405e100 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:44.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.183+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(16..16 src has 1..16) ==== 3325+0+0 (secure 0 0 0) 0x7fc82c077610 con 0x7fc83805bfc0 2026-03-20T11:45:44.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.183+0000 7fc836575640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 0x7fc80405e100 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:44.181 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.183+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7fc83805c390 con 0x7fc83805bfc0 2026-03-20T11:45:44.182 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.184+0000 7fc836575640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 0x7fc80405e100 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc8200027e0 tx=0x7fc820007a20 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:44.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.186+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7fc82c049360 con 0x7fc83805bfc0 2026-03-20T11:45:44.306 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:44.308+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "default.rgw.buckets.data", "app": "rgw"} v 0) -- 0x7fc83810d110 con 0x7fc83805bfc0 2026-03-20T11:45:45.089 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.090+0000 7fc8177fe640 1 -- 192.168.123.100:0/2125789888 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "default.rgw.buckets.data", "app": "rgw"}]=0 enabled application 'rgw' on pool 'default.rgw.buckets.data' v17) ==== 185+0+0 (secure 0 0 0) 0x7fc82c046210 con 0x7fc83805bfc0 2026-03-20T11:45:45.089 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.data' 2026-03-20T11:45:45.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.102+0000 
7fc83d042640 1 -- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 msgr2=0x7fc80405e100 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:45.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.102+0000 7fc83d042640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 0x7fc80405e100 secure :-1 s=READY pgs=36 cs=0 l=1 rev1=1 crypto rx=0x7fc8200027e0 tx=0x7fc820007a20 comp rx=0 tx=0).stop 2026-03-20T11:45:45.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.102+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 msgr2=0x7fc83810ad40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:45.102 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.102+0000 7fc83d042640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 secure :-1 s=READY pgs=115 cs=0 l=1 rev1=1 crypto rx=0x7fc82c009880 tx=0x7fc82c0047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:45.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.106+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 shutdown_connections 2026-03-20T11:45:45.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.106+0000 7fc83d042640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fc80403dc50 0x7fc80405e100 unknown :-1 s=CLOSED pgs=36 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:45.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.106+0000 7fc83d042640 1 --2- 192.168.123.100:0/2125789888 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7fc83805bfc0 0x7fc83810ad40 unknown :-1 s=CLOSED pgs=115 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:45.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.106+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 >> 192.168.123.100:0/2125789888 conn(0x7fc838082850 msgr2=0x7fc83805a300 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:45.106 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.108+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 shutdown_connections 2026-03-20T11:45:45.108 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.110+0000 7fc83d042640 1 -- 192.168.123.100:0/2125789888 wait complete. 
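The ack above confirms the rgw application is now enabled on default.rgw.buckets.data; a pool without an application tag would otherwise raise the POOL_APP_NOT_ENABLED health warning once data lands in it. One way to verify the tag afterwards is "ceph osd pool application get"; the check below is a sketch that assumes the command's JSON output is a map of application names to metadata.

    import json
    import subprocess

    # Sketch: confirm the "rgw" application tag is present on the pool.
    # Assumes the ceph CLI's JSON output for "osd pool application get".
    out = subprocess.run(
        ["sudo", "ceph", "osd", "pool", "application", "get",
         "default.rgw.buckets.data", "-f", "json", "--cluster", "ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    apps = json.loads(out)
    assert "rgw" in apps, f"unexpected application tags: {apps}"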
2026-03-20T11:45:45.130 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool create default.rgw.buckets.index 64 64 --cluster ceph 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd87ac7640 1 Processor -- start 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd87ac7640 1 -- start start 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd87ac7640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7efd80137550 0x7efd80130a40 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd87ac7640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7efd8005aa90 con 0x7efd80137180 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd87ac7640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7efd80059870 con 0x7efd80137550 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd8503b640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7efd80137550 0x7efd80130a40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd8503b640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7efd80137550 0x7efd80130a40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36390/0 (socket says 192.168.123.100:36390) 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd8503b640 1 -- 192.168.123.100:0/3329690507 learned_addr learned my addr 192.168.123.100:0/3329690507 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.209+0000 7efd8503b640 1 -- 192.168.123.100:0/3329690507 >> v1:192.168.123.100:6789/0 conn(0x7efd80137180 legacy=0x7efd80130330 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:45.207 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd8503b640 1 -- 192.168.123.100:0/3329690507 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd80137970 con 0x7efd80137550 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd8503b640 1 --2- 192.168.123.100:0/3329690507 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137550 0x7efd80130a40 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7efd74009870 tx=0x7efd7402ee60 comp rx=0 tx=0).ready entity=mon.0 client_cookie=2d0c5788d65d1a1c server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd8483a640 1 -- 192.168.123.100:0/3329690507 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efd7403c070 con 0x7efd80137550 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd8483a640 1 -- 192.168.123.100:0/3329690507 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7efd7402faa0 con 0x7efd80137550 2026-03-20T11:45:45.208 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd8483a640 1 -- 192.168.123.100:0/3329690507 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efd7402fda0 con 0x7efd80137550 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd87ac7640 1 -- 192.168.123.100:0/3329690507 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137550 msgr2=0x7efd80130a40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.210+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3329690507 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137550 0x7efd80130a40 secure :-1 s=READY pgs=117 cs=0 l=1 rev1=1 crypto rx=0x7efd74009870 tx=0x7efd7402ee60 comp rx=0 tx=0).stop 2026-03-20T11:45:45.208 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- 192.168.123.100:0/3329690507 shutdown_connections 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3329690507 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137550 0x7efd80130a40 unknown :-1 s=CLOSED pgs=117 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- 192.168.123.100:0/3329690507 >> 192.168.123.100:0/3329690507 conn(0x7efd80082850 msgr2=0x7efd80082c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- 192.168.123.100:0/3329690507 shutdown_connections 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- 192.168.123.100:0/3329690507 wait complete. 
2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 Processor -- start 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- start start 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.211+0000 7efd87ac7640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7efd80130f80 con 0x7efd80137180 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd8583c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd8583c640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36402/0 (socket says 192.168.123.100:36402) 2026-03-20T11:45:45.209 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd8583c640 1 -- 192.168.123.100:0/3132552119 learned_addr learned my addr 192.168.123.100:0/3132552119 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd8583c640 1 -- 192.168.123.100:0/3132552119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7efd8007a9b0 con 0x7efd80137180 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd8583c640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7efd7000c450 tx=0x7efd7000c920 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efd70016020 con 0x7efd80137180 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7efd70005150 con 0x7efd80137180 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7efd70005430 con 0x7efd80137180 2026-03-20T11:45:45.210 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7efd8007a0c0 con 0x7efd80137180 2026-03-20T11:45:45.210 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.212+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7efd80079b70 con 0x7efd80137180 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7efd700068c0 con 0x7efd80137180 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd6e7fc640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 0x7efd5005e0b0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(17..17 src has 1..17) ==== 3336+0+0 (secure 0 0 0) 0x7efd70051af0 con 0x7efd80137180 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd8503b640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 0x7efd5005e0b0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd8503b640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 0x7efd5005e0b0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7efd74004770 tx=0x7efd74033000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:45.211 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.213+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7efd4c005180 con 0x7efd80137180 2026-03-20T11:45:45.214 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.216+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7efd7001eda0 con 0x7efd80137180 2026-03-20T11:45:45.334 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:45.336+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool create", "pool": "default.rgw.buckets.index", "pg_num": 64, "pgp_num": 64} v 0) -- 0x7efd4c005470 con 0x7efd80137180 2026-03-20T11:45:46.093 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.094+0000 7efd6e7fc640 1 -- 192.168.123.100:0/3132552119 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool create", "pool": "default.rgw.buckets.index", "pg_num": 64, "pgp_num": 64}]=0 pool 'default.rgw.buckets.index' created v18) ==== 169+0+0 (secure 0 0 0) 0x7efd70050ad0 con 0x7efd80137180 2026-03-20T11:45:46.093 INFO:teuthology.orchestra.run.vm00.stderr:pool 'default.rgw.buckets.index' created 2026-03-20T11:45:46.099 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.101+0000 7efd87ac7640 1 -- 
192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 msgr2=0x7efd5005e0b0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:46.099 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.101+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 0x7efd5005e0b0 secure :-1 s=READY pgs=37 cs=0 l=1 rev1=1 crypto rx=0x7efd74004770 tx=0x7efd74033000 comp rx=0 tx=0).stop 2026-03-20T11:45:46.099 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.101+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 msgr2=0x7efd80146120 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:46.099 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.101+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 secure :-1 s=READY pgs=118 cs=0 l=1 rev1=1 crypto rx=0x7efd7000c450 tx=0x7efd7000c920 comp rx=0 tx=0).stop 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 shutdown_connections 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7efd5003dc00 0x7efd5005e0b0 unknown :-1 s=CLOSED pgs=37 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 --2- 192.168.123.100:0/3132552119 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7efd80137180 0x7efd80146120 unknown :-1 s=CLOSED pgs=118 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 >> 192.168.123.100:0/3132552119 conn(0x7efd80082850 msgr2=0x7efd80056cf0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 shutdown_connections 2026-03-20T11:45:46.104 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.106+0000 7efd87ac7640 1 -- 192.168.123.100:0/3132552119 wait complete. 
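With "pool 'default.rgw.buckets.index' created" acknowledged above, both bucket pools now exist and the task repeats the application-enable step for the index pool in the lines that follow. A sanity check that could be run at this point is to confirm both RGW bucket pools came up with the requested 64 PGs; the sketch below parses "ceph osd pool ls detail -f json" and assumes its usual pool_name/pg_num field names.

    import json
    import subprocess

    # Sketch: confirm both RGW bucket pools exist with the requested PG count.
    # Assumes the JSON layout of "ceph osd pool ls detail" (pool_name / pg_num).
    out = subprocess.run(
        ["sudo", "ceph", "osd", "pool", "ls", "detail", "-f", "json",
         "--cluster", "ceph"],
        check=True, capture_output=True, text=True,
    ).stdout
    pools = {p["pool_name"]: p for p in json.loads(out)}
    for name in ("default.rgw.buckets.data", "default.rgw.buckets.index"):
        assert name in pools, f"missing pool {name}"
        assert pools[name].get("pg_num") == 64, pools[name]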
2026-03-20T11:45:46.116 DEBUG:teuthology.orchestra.run.vm00:> sudo ceph osd pool application enable default.rgw.buckets.index rgw --cluster ceph 2026-03-20T11:45:46.184 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b3c13640 1 Processor -- start 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b3c13640 1 -- start start 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b3c13640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f83ac05bfc0 0x7f83ac05c390 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b3c13640 1 -- --> v1:192.168.123.100:6789/0 -- auth(proto 0 30 bytes epoch 0) -- 0x7f83ac059010 con 0x7f83ac05c8d0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b3c13640 1 -- --> v2:192.168.123.100:3300/0 -- mon_getmap magic: 0 -- 0x7f83ac058430 con 0x7f83ac05bfc0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b1988640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f83ac05bfc0 0x7f83ac05c390 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b1988640 1 --2- >> v2:192.168.123.100:3300/0 conn(0x7f83ac05bfc0 0x7f83ac05c390 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36416/0 (socket says 192.168.123.100:36416) 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b1988640 1 -- 192.168.123.100:0/1610734410 learned_addr learned my addr 192.168.123.100:0/1610734410 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.186+0000 7f83b1988640 1 -- 192.168.123.100:0/1610734410 >> v1:192.168.123.100:6789/0 conn(0x7f83ac05c8d0 legacy=0x7f83ac1709d0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).mark_down 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b1988640 1 -- 192.168.123.100:0/1610734410 --> v2:192.168.123.100:3300/0 -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f83ac059870 con 0x7f83ac05bfc0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b1988640 1 --2- 192.168.123.100:0/1610734410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac05c390 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f839c009080 tx=0x7f839c02ee20 comp rx=0 tx=0).ready entity=mon.0 client_cookie=243052b3304a2929 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b0986640 1 -- 192.168.123.100:0/1610734410 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f839c03c070 con 0x7f83ac05bfc0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b0986640 1 -- 192.168.123.100:0/1610734410 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f839c02fa10 con 0x7f83ac05bfc0 2026-03-20T11:45:46.185 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b0986640 1 -- 192.168.123.100:0/1610734410 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f839c02fd10 con 0x7f83ac05bfc0 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b3c13640 1 -- 192.168.123.100:0/1610734410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 msgr2=0x7f83ac05c390 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:46.185 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.187+0000 7f83b3c13640 1 --2- 192.168.123.100:0/1610734410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac05c390 secure :-1 s=READY pgs=120 cs=0 l=1 rev1=1 crypto rx=0x7f839c009080 tx=0x7f839c02ee20 comp rx=0 tx=0).stop 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- 192.168.123.100:0/1610734410 shutdown_connections 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 --2- 192.168.123.100:0/1610734410 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac05c390 unknown :-1 s=CLOSED pgs=120 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- 192.168.123.100:0/1610734410 >> 192.168.123.100:0/1610734410 conn(0x7f83ac082850 msgr2=0x7f83ac082c50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- 192.168.123.100:0/1610734410 shutdown_connections 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- 192.168.123.100:0/1610734410 wait complete. 
2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 Processor -- start 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- start start 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:46.186 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.188+0000 7f83b3c13640 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x7f83ac1720f0 con 0x7f83ac05bfc0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b1988640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b1988640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:36426/0 (socket says 192.168.123.100:36426) 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b1988640 1 -- 192.168.123.100:0/3375545339 learned_addr learned my addr 192.168.123.100:0/3375545339 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b1988640 1 -- 192.168.123.100:0/3375545339 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x7f83ac109d10 con 0x7f83ac05bfc0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b1988640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f839c009940 tx=0x7f839c0047a0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f839c03c050 con 0x7f83ac05bfc0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f839c03d040 con 0x7f83ac05bfc0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f839c004030 con 0x7f83ac05bfc0 2026-03-20T11:45:46.187 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x7f83ac10a570 con 0x7f83ac05bfc0 2026-03-20T11:45:46.188 
INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.189+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x7f83ac114550 con 0x7f83ac05bfc0 2026-03-20T11:45:46.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.190+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f839c0041d0 con 0x7f83ac05bfc0 2026-03-20T11:45:46.188 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.190+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "get_command_descriptions"} v 0) -- 0x7f83ac075b20 con 0x7f83ac05bfc0 2026-03-20T11:45:46.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.190+0000 7f839a7fc640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 0x7f837c05dff0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:45:46.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.190+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(18..18 src has 1..18) ==== 3703+0+0 (secure 0 0 0) 0x7f839c077590 con 0x7f83ac05bfc0 2026-03-20T11:45:46.190 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.192+0000 7f83b1187640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 0x7f837c05dff0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:45:46.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.193+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0 v0) ==== 72+0+231577 (secure 0 0 0) 0x7f83ac075b20 con 0x7f83ac05bfc0 2026-03-20T11:45:46.191 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.193+0000 7f83b1187640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 0x7f837c05dff0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f83a00037a0 tx=0x7f83a0007b40 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:45:46.308 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:46.310+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_command({"prefix": "osd pool application enable", "pool": "default.rgw.buckets.index", "app": "rgw"} v 0) -- 0x7f83ac075d40 con 0x7f83ac05bfc0 2026-03-20T11:45:47.017 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.019+0000 7f839a7fc640 1 -- 192.168.123.100:0/3375545339 <== mon.0 v2:192.168.123.100:3300/0 7 ==== mon_command_ack([{"prefix": "osd pool application enable", "pool": "default.rgw.buckets.index", "app": "rgw"}]=0 enabled application 'rgw' on pool 'default.rgw.buckets.index' v19) ==== 187+0+0 (secure 0 0 0) 0x7f83ac075d40 con 0x7f83ac05bfc0 2026-03-20T11:45:47.017 INFO:teuthology.orchestra.run.vm00.stderr:enabled application 'rgw' on pool 'default.rgw.buckets.index' 2026-03-20T11:45:47.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.035+0000 
7f83b3c13640 1 -- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 msgr2=0x7f837c05dff0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:47.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.035+0000 7f83b3c13640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 0x7f837c05dff0 secure :-1 s=READY pgs=38 cs=0 l=1 rev1=1 crypto rx=0x7f83a00037a0 tx=0x7f83a0007b40 comp rx=0 tx=0).stop 2026-03-20T11:45:47.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.035+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 msgr2=0x7f83ac107980 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:45:47.033 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.035+0000 7f83b3c13640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 secure :-1 s=READY pgs=121 cs=0 l=1 rev1=1 crypto rx=0x7f839c009940 tx=0x7f839c0047a0 comp rx=0 tx=0).stop 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.038+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 shutdown_connections 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.038+0000 7f83b3c13640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f837c03db40 0x7f837c05dff0 unknown :-1 s=CLOSED pgs=38 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.038+0000 7f83b3c13640 1 --2- 192.168.123.100:0/3375545339 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x7f83ac05bfc0 0x7f83ac107980 unknown :-1 s=CLOSED pgs=121 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.038+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 >> 192.168.123.100:0/3375545339 conn(0x7f83ac082850 msgr2=0x7f83ac114b20 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.039+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 shutdown_connections 2026-03-20T11:45:47.037 INFO:teuthology.orchestra.run.vm00.stderr:2026-03-20T11:45:47.039+0000 7f83b3c13640 1 -- 192.168.123.100:0/3375545339 wait complete. 2026-03-20T11:45:47.052 DEBUG:tasks.rgw:Pools created 2026-03-20T11:45:47.052 INFO:tasks.rgw:Starting rgw... 
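[editor's note] The rgw task above finishes pool setup by enabling the 'rgw' application on each gateway pool through a mon command; the ack for default.rgw.buckets.index ("enabled application 'rgw' on pool ...") is visible in the stderr stream just before "Pools created". Below is a minimal standalone sketch of that step, not teuthology's own helper; the pool list is illustrative (typical default-zone pool names) and only default.rgw.buckets.index is confirmed by this log.

    # Hypothetical sketch: enable the 'rgw' application on RGW pools the same
    # way the task does via `ceph osd pool application enable <pool> rgw`.
    import subprocess

    # Assumed pool names; only default.rgw.buckets.index appears in this log.
    RGW_POOLS = [
        ".rgw.root",
        "default.rgw.control",
        "default.rgw.meta",
        "default.rgw.log",
        "default.rgw.buckets.index",
    ]

    def enable_rgw_app(pool, cluster="ceph"):
        # Equivalent to the mon command seen above:
        # {"prefix": "osd pool application enable", "pool": <pool>, "app": "rgw"}
        subprocess.run(
            ["sudo", "ceph", "osd", "pool", "application", "enable",
             pool, "rgw", "--cluster", cluster],
            check=True,
        )

    for p in RGW_POOLS:
        enable_rgw_app(p)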
2026-03-20T11:45:47.052 INFO:tasks.rgw:rgw client.0 config is {'dns-name': ''} 2026-03-20T11:45:47.052 INFO:tasks.rgw:Using beast as radosgw frontend 2026-03-20T11:45:47.052 DEBUG:teuthology.orchestra.run.vm00:> sudo echo -n http://vm00.local:80 | sudo tee /home/ubuntu/cephtest/url_file 2026-03-20T11:45:47.076 INFO:teuthology.orchestra.run.vm00.stdout:http://vm00.local:80 2026-03-20T11:45:47.076 DEBUG:teuthology.orchestra.run.vm00:> sudo chown ceph /home/ubuntu/cephtest/url_file 2026-03-20T11:45:47.139 INFO:tasks.rgw.client.0:Restarting daemon 2026-03-20T11:45:47.139 DEBUG:teuthology.orchestra.run.vm00:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper term radosgw --rgw-frontends 'beast port=80' -n client.0 --cluster ceph -k /etc/ceph/ceph.client.0.keyring --log-file /var/log/ceph/rgw.ceph.client.0.log --rgw_ops_log_socket_path /home/ubuntu/cephtest/rgw.opslog.ceph.client.0.sock --rgw-dns-name vm00.local --foreground | sudo tee /var/log/ceph/rgw.ceph.client.0.stdout 2>&1 2026-03-20T11:45:47.181 INFO:tasks.rgw.client.0:Started 2026-03-20T11:45:47.181 INFO:tasks.rgw:Polling client.0 until it starts accepting connections on http://vm00.local:80/ 2026-03-20T11:45:47.181 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T11:45:47.198 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T11:45:47.198 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T11:45:47.202 DEBUG:teuthology.orchestra.run:got remote process result: 7 2026-03-20T11:45:47.202 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2026-03-20T11:45:47.202 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused 2026-03-20T11:45:48.202 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T11:45:48.221 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T11:45:48.221 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T11:45:48.221 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2026-03-20T11:45:48.221 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused 2026-03-20T11:45:48.221 DEBUG:teuthology.orchestra.run:got remote process result: 7 2026-03-20T11:45:50.223 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T11:45:50.241 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T11:45:50.241 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T11:45:50.241 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2026-03-20T11:45:50.241 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused 2026-03-20T11:45:50.241 DEBUG:teuthology.orchestra.run:got remote process result: 7 2026-03-20T11:45:54.242 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T11:45:54.259 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T11:45:54.259 
INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T11:45:54.259 DEBUG:teuthology.orchestra.run:got remote process result: 7 2026-03-20T11:45:54.259 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 2026-03-20T11:45:54.259 INFO:teuthology.orchestra.run.vm00.stderr:curl: (7) Failed to connect to vm00.local port 80: Connection refused 2026-03-20T11:46:02.260 DEBUG:teuthology.orchestra.run.vm00:> curl http://vm00.local:80/ 2026-03-20T11:46:02.279 INFO:teuthology.orchestra.run.vm00.stderr: % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-20T11:46:02.279 INFO:teuthology.orchestra.run.vm00.stderr: Dload Upload Total Spent Left Speed 2026-03-20T11:46:02.280 INFO:teuthology.orchestra.run.vm00.stderr: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 187 0 187 0 0 182k 0 --:--:-- --:--:-- --:--:-- 182k 2026-03-20T11:46:02.281 INFO:teuthology.orchestra.run.vm00.stdout:anonymous 2026-03-20T11:46:02.281 INFO:teuthology.run_tasks:Running task workunit... 2026-03-20T11:46:02.284 INFO:tasks.workunit:Pulling workunits from ref 7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe 2026-03-20T11:46:02.285 INFO:tasks.workunit:Making a separate scratch dir for every client... 2026-03-20T11:46:02.285 DEBUG:teuthology.orchestra.run.vm00:> stat -- /home/ubuntu/cephtest/mnt.0 2026-03-20T11:46:02.336 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-20T11:46:02.336 INFO:teuthology.orchestra.run.vm00.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory 2026-03-20T11:46:02.336 DEBUG:teuthology.orchestra.run.vm00:> mkdir -- /home/ubuntu/cephtest/mnt.0 2026-03-20T11:46:02.392 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0 2026-03-20T11:46:02.392 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0 2026-03-20T11:46:02.448 INFO:tasks.workunit:timeout=3h 2026-03-20T11:46:02.448 INFO:tasks.workunit:cleanup=True 2026-03-20T11:46:02.448 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe 2026-03-20T11:46:02.505 INFO:tasks.workunit.client.0.vm00.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'... 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:Note: switching to '7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe'. 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:You are in 'detached HEAD' state. You can look around, make experimental 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:changes and commit them, and you can discard any commits you make in this 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:state without impacting any branches by switching back to a branch. 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:If you want to create a new branch to retain commits you create, you may 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:do so (now or later) by using -c with the switch command. 
Example: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: git switch -c 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr:Or undo this operation with: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: git switch - 2026-03-20T11:46:47.609 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.610 INFO:tasks.workunit.client.0.vm00.stderr:Turn off this advice by setting config variable advice.detachedHead to false 2026-03-20T11:46:47.610 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:47.610 INFO:tasks.workunit.client.0.vm00.stderr:HEAD is now at 7b4fb1902b2 qa/tasks/tox: use uv python instead of system 2026-03-20T11:46:47.615 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0 2026-03-20T11:46:47.672 INFO:tasks.workunit.client.0.vm00.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done 2026-03-20T11:46:47.674 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-20T11:46:47.674 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test 2026-03-20T11:46:47.717 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io 2026-03-20T11:46:47.758 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read 2026-03-20T11:46:47.792 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io' 2026-03-20T11:46:47.793 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-20T11:46:47.793 INFO:tasks.workunit.client.0.vm00.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc 2026-03-20T11:46:47.822 INFO:tasks.workunit.client.0.vm00.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs' 2026-03-20T11:46:47.826 DEBUG:teuthology.orchestra.run.vm00:> set -ex 2026-03-20T11:46:47.826 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout 2026-03-20T11:46:47.883 INFO:tasks.workunit:Running workunits matching rgw/test_rgw_orphan_list.sh on client.0... 2026-03-20T11:46:47.884 INFO:tasks.workunit:Running workunit rgw/test_rgw_orphan_list.sh... 
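[editor's note] A little earlier in this run the rgw task started radosgw and then polled http://vm00.local:80/ with curl until the daemon answered; the timestamps show the retry delay doubling (roughly 1 s, 2 s, 4 s, 8 s) across the "Connection refused" attempts before the first successful response. A hedged sketch of that wait-for-ready pattern follows; the URL, attempt count, and timeouts are illustrative and this is not teuthology's actual implementation.

    # Minimal sketch of polling an RGW endpoint until it accepts connections,
    # backing off like the 1s/2s/4s/8s gaps visible in the curl retries above.
    import time
    import urllib.request

    def wait_for_rgw(url="http://vm00.local:80/", attempts=8, first_delay=1.0):
        delay = first_delay
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()      # first successful HTTP response body
            except OSError:
                # URLError / ConnectionRefusedError both derive from OSError:
                # the daemon is not listening yet, so wait and back off.
                time.sleep(delay)
                delay *= 2
        raise TimeoutError(f"rgw at {url} never started accepting connections")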
2026-03-20T11:46:47.884 DEBUG:teuthology.orchestra.run.vm00:workunit test rgw/test_rgw_orphan_list.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh 2026-03-20T11:46:47.946 INFO:tasks.workunit.client.0.vm00.stdout:Fully Qualified Domain Name: vm00.local 2026-03-20T11:46:48.299 INFO:tasks.workunit.client.0.vm00.stdout:Last metadata expiration check: 0:02:22 ago on Fri 20 Mar 2026 11:44:26 AM UTC. 2026-03-20T11:46:48.376 INFO:tasks.workunit.client.0.vm00.stdout:Package s3cmd-2.4.0-1.el9.noarch is already installed. 2026-03-20T11:46:48.397 INFO:tasks.workunit.client.0.vm00.stdout:Dependencies resolved. 2026-03-20T11:46:48.397 INFO:tasks.workunit.client.0.vm00.stdout:Nothing to do. 2026-03-20T11:46:48.397 INFO:tasks.workunit.client.0.vm00.stdout:Complete! 2026-03-20T11:46:48.774 INFO:tasks.workunit.client.0.vm00.stdout:Last metadata expiration check: 0:02:22 ago on Fri 20 Mar 2026 11:44:26 AM UTC. 2026-03-20T11:46:48.853 INFO:tasks.workunit.client.0.vm00.stdout:Package python3-setuptools-53.0.0-15.el9.noarch is already installed. 2026-03-20T11:46:48.873 INFO:tasks.workunit.client.0.vm00.stdout:Dependencies resolved. 2026-03-20T11:46:48.873 INFO:tasks.workunit.client.0.vm00.stdout:Nothing to do. 2026-03-20T11:46:48.873 INFO:tasks.workunit.client.0.vm00.stdout:Complete! 2026-03-20T11:46:49.276 INFO:tasks.workunit.client.0.vm00.stdout:Last metadata expiration check: 0:02:23 ago on Fri 20 Mar 2026 11:44:26 AM UTC. 2026-03-20T11:46:49.350 INFO:tasks.workunit.client.0.vm00.stdout:Package python3-pip-21.3.1-1.el9.noarch is already installed. 2026-03-20T11:46:49.371 INFO:tasks.workunit.client.0.vm00.stdout:Dependencies resolved. 2026-03-20T11:46:49.372 INFO:tasks.workunit.client.0.vm00.stdout:Nothing to do. 2026-03-20T11:46:49.372 INFO:tasks.workunit.client.0.vm00.stdout:Complete! 2026-03-20T11:46:49.563 INFO:tasks.workunit.client.0.vm00.stdout:Requirement already satisfied: setuptools in /usr/lib/python3.9/site-packages (53.0.0) 2026-03-20T11:46:49.847 INFO:tasks.workunit.client.0.vm00.stdout:Collecting setuptools 2026-03-20T11:46:49.877 INFO:tasks.workunit.client.0.vm00.stdout: Downloading setuptools-82.0.1-py3-none-any.whl (1.0 MB) 2026-03-20T11:46:50.071 INFO:tasks.workunit.client.0.vm00.stdout:Installing collected packages: setuptools 2026-03-20T11:46:50.374 INFO:tasks.workunit.client.0.vm00.stdout:Successfully installed setuptools-82.0.1 2026-03-20T11:46:50.374 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv 2026-03-20T11:46:50.651 INFO:tasks.workunit.client.0.vm00.stdout:Collecting python-swiftclient 2026-03-20T11:46:50.686 INFO:tasks.workunit.client.0.vm00.stdout: Downloading python_swiftclient-4.10.0-py3-none-any.whl (88 kB) 2026-03-20T11:46:50.709 INFO:tasks.workunit.client.0.vm00.stdout:Requirement already satisfied: requests>=2.4.0 in /usr/lib/python3.9/site-packages (from python-swiftclient) (2.25.1) 2026-03-20T11:46:50.720 INFO:tasks.workunit.client.0.vm00.stdout:Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests>=2.4.0->python-swiftclient) (1.26.5) 2026-03-20T11:46:50.720 INFO:tasks.workunit.client.0.vm00.stdout:Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3.9/site-packages (from requests>=2.4.0->python-swiftclient) (2.10) 2026-03-20T11:46:50.720 INFO:tasks.workunit.client.0.vm00.stdout:Requirement already satisfied: chardet<5,>=3.0.2 in /usr/lib/python3.9/site-packages (from requests>=2.4.0->python-swiftclient) (4.0.0) 2026-03-20T11:46:50.803 INFO:tasks.workunit.client.0.vm00.stdout:Installing collected packages: python-swiftclient 2026-03-20T11:46:50.827 INFO:tasks.workunit.client.0.vm00.stderr: WARNING: The script swift is installed in '/usr/local/bin' which is not on PATH. 2026-03-20T11:46:50.827 INFO:tasks.workunit.client.0.vm00.stderr: Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 2026-03-20T11:46:50.829 INFO:tasks.workunit.client.0.vm00.stdout:Successfully installed python-swiftclient-4.10.0 2026-03-20T11:46:50.829 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv 2026-03-20T11:46:50.896 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.899+0000 7fa42511f900 1 Processor -- start 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.899+0000 7fa42511f900 1 -- start start 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.899+0000 7fa42511f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.899+0000 7fa42511f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5578e569c1d0 con 0x5578e578ae80 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa420e5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa420e5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59146/0 (socket says 192.168.123.100:59146) 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa420e5d640 1 -- 192.168.123.100:0/1707709714 learned_addr learned my addr 192.168.123.100:0/1707709714 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa420e5d640 1 -- 192.168.123.100:0/1707709714 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5578e58d8310 con 0x5578e578ae80 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa420e5d640 1 --2- 192.168.123.100:0/1707709714 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fa41800b3c0 tx=0x7fa41800b890 comp rx=0 tx=0).ready entity=mon.0 client_cookie=3fcc529eff5803b3 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa40affd640 1 -- 192.168.123.100:0/1707709714 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa418012020 con 0x5578e578ae80 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa40affd640 1 -- 192.168.123.100:0/1707709714 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fa41800d5c0 con 0x5578e578ae80 2026-03-20T11:46:50.897 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.900+0000 7fa40affd640 1 -- 192.168.123.100:0/1707709714 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa41800d8a0 con 0x5578e578ae80 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- 192.168.123.100:0/1707709714 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 
msgr2=0x5578e58d3a50 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 --2- 192.168.123.100:0/1707709714 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 secure :-1 s=READY pgs=131 cs=0 l=1 rev1=1 crypto rx=0x7fa41800b3c0 tx=0x7fa41800b890 comp rx=0 tx=0).stop 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- 192.168.123.100:0/1707709714 shutdown_connections 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 --2- 192.168.123.100:0/1707709714 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58d3a50 unknown :-1 s=CLOSED pgs=131 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- 192.168.123.100:0/1707709714 >> 192.168.123.100:0/1707709714 conn(0x5578e57a2a20 msgr2=0x5578e58d75d0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- 192.168.123.100:0/1707709714 shutdown_connections 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- 192.168.123.100:0/1707709714 wait complete. 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 Processor -- start 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- start start 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.898 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.901+0000 7fa42511f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5578e579e0e0 con 0x5578e578ae80 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa420e5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa420e5d640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59158/0 (socket says 192.168.123.100:59158) 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa420e5d640 1 -- 192.168.123.100:0/3799469788 learned_addr learned my addr 192.168.123.100:0/3799469788 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa420e5d640 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5578e58cdc40 con 0x5578e578ae80 2026-03-20T11:46:50.899 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fa418000c00 tx=0x7fa418000f00 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa4097fa640 1 -- 192.168.123.100:0/3799469788 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa418012030 con 0x5578e578ae80 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa4097fa640 1 -- 192.168.123.100:0/3799469788 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fa41800e950 con 0x5578e578ae80 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa4097fa640 1 -- 192.168.123.100:0/3799469788 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa41800ec30 con 0x5578e578ae80 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x5578e58ccce0 con 0x5578e578ae80 2026-03-20T11:46:50.899 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.902+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x5578e58ce190 con 0x5578e578ae80 2026-03-20T11:46:50.900 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa4097fa640 1 -- 192.168.123.100:0/3799469788 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fa41800c040 con 0x5578e578ae80 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa4097fa640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 0x7fa40005c900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa4097fa640 1 -- 192.168.123.100:0/3799469788 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7fa41801f070 con 0x5578e578ae80 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 0x5578e5947d00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5948240 con 0x5578e5799120 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.903+0000 7fa413fff640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 
0x7fa40005c900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42165e640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 0x5578e5947d00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa413fff640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 0x7fa40005c900 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x5578e579d7d0 tx=0x7fa40c064000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42165e640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 0x5578e5947d00 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42165e640 1 -- 192.168.123.100:0/3799469788 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7fa414002040 con 0x5578e5799120 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 5.f 5:f43b6ece:::zone_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e59485e0 con 0x5578e5799120 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42165e640 1 -- 192.168.123.100:0/3799469788 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 zone_names.default [read 0~46 out=46b] v0'0 uv1 ondisk = 0) ==== 162+0+46 (crc 0 0 0) 0x7fa414002040 con 0x5578e5799120 2026-03-20T11:46:50.901 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.904+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 5.1d 5:bd648c13:::zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5949990 con 0x5578e5799120 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa42165e640 1 -- 192.168.123.100:0/3799469788 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b [call out=48b,read 0~1060 out=1060b] v0'0 uv1 ondisk = 0) ==== 232+0+1108 (crc 0 0 0) 0x7fa414002040 con 0x5578e5799120 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 5.f 
5:f4c53578:::zonegroups_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5948e30 con 0x5578e5799120 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa42165e640 1 -- 192.168.123.100:0/3799469788 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 zonegroups_names.default [read 0~46 out=46b] v0'0 uv2 ondisk = 0) ==== 168+0+46 (crc 0 0 0) 0x7fa414002040 con 0x5578e5799120 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e594ad40 0x5578e596b120 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:5 5.17 5:ef670bd1:::zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e596b660 con 0x5578e594ad40 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e594ad40 0x5578e596b120 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.902 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.905+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e594ad40 0x5578e596b120 crc :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa420e5d640 1 -- 192.168.123.100:0/3799469788 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(5 zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 [call out=48b,read 0~436 out=436b] v0'0 uv1 ondisk = 0) ==== 237+0+484 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e594ad40 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42511f900 1 Processor -- start 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42511f900 1 -- start start 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42511f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 0x5578e5983630 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42511f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5578e56a6270 con 0x5578e5983260 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 0x5578e5983630 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload 
supported=3 required=0 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 0x5578e5983630 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59162/0 (socket says 192.168.123.100:59162) 2026-03-20T11:46:50.903 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.906+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 learned_addr learned my addr 192.168.123.100:0/2191240883 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5578e5986080 con 0x5578e5983260 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa42165e640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 0x5578e5983630 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x5578e58cceb0 tx=0x7fa414017820 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa3fe7fc640 1 -- 192.168.123.100:0/2191240883 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa414017200 con 0x5578e5983260 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa3fe7fc640 1 -- 192.168.123.100:0/2191240883 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fa414013440 con 0x5578e5983260 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa3fe7fc640 1 -- 192.168.123.100:0/2191240883 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa414013720 con 0x5578e5983260 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x5578e5985a60 con 0x5578e5983260 2026-03-20T11:46:50.904 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.907+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x5578e5983d80 con 0x5578e5983260 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.908+0000 7fa3fe7fc640 1 -- 192.168.123.100:0/2191240883 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fa41402a020 con 0x5578e5983260 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.908+0000 7fa3fe7fc640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3f003c450 0x7fa3f005c900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.908+0000 7fa3fe7fc640 1 -- 192.168.123.100:0/2191240883 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7fa41401d070 con 0x5578e5983260 
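[editor's note] The messenger lines in this stretch come from the orphan-list tooling opening its own RADOS clients; the workunit itself was launched a bit earlier with the environment shown on the invocation line (CEPH_REF, TESTDIR, CEPH_ARGS, CEPH_ID, CEPH_BASE/ROOT/MNT) under a 3 h timeout. A hedged sketch of reproducing that invocation by hand on the test node is below; paths and the SHA are the ones from this run, and teuthology's adjust-ulimits and ceph-coverage wrappers are deliberately omitted.

    # Hypothetical manual re-run of the workunit shown above (not teuthology code).
    import os
    import subprocess

    TESTDIR = "/home/ubuntu/cephtest"
    CLONE = f"{TESTDIR}/clone.client.0"
    SCRATCH = f"{TESTDIR}/mnt.0/client.0/tmp"

    env = dict(os.environ,
               CEPH_CLI_TEST_DUP_COMMAND="1",
               CEPH_REF="7b4fb1902b22ff4ea3f4ff6a953bc42198ebeffe",
               TESTDIR=TESTDIR,
               CEPH_ARGS="--cluster ceph",
               CEPH_ID="0",
               CEPH_BASE=CLONE,
               CEPH_ROOT=CLONE,
               CEPH_MNT=f"{TESTDIR}/mnt.0",
               PATH=os.environ["PATH"] + ":/usr/sbin")

    os.makedirs(SCRATCH, exist_ok=True)
    subprocess.run(
        ["timeout", "3h", f"{CLONE}/qa/workunits/rgw/test_rgw_orphan_list.sh"],
        cwd=SCRATCH, env=env, check=True,
    )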
2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.908+0000 7fa420e5d640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3f003c450 0x7fa3f005c900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.908+0000 7fa420e5d640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3f003c450 0x7fa3f005c900 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fa41802beb0 tx=0x7fa418020470 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42511f900 1 Processor -- start 2026-03-20T11:46:50.905 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42511f900 1 -- start start 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42511f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42511f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5578e59d5930 con 0x5578e5a0ef60 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59170/0 (socket says 192.168.123.100:59170) 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42165e640 1 -- 192.168.123.100:0/3973039928 learned_addr learned my addr 192.168.123.100:0/3973039928 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42165e640 1 -- 192.168.123.100:0/3973039928 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5578e59e51e0 con 0x5578e5a0ef60 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa42165e640 1 --2- 192.168.123.100:0/3973039928 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7fa4140450d0 tx=0x7fa414053770 comp rx=0 tx=0).ready entity=mon.0 client_cookie=1c3ebda0452c5085 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa3effff640 1 -- 192.168.123.100:0/3973039928 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa414051320 con 0x5578e5a0ef60 
2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa3effff640 1 -- 192.168.123.100:0/3973039928 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fa41404b070 con 0x5578e5a0ef60 2026-03-20T11:46:50.906 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.909+0000 7fa3effff640 1 -- 192.168.123.100:0/3973039928 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa4140292f0 con 0x5578e5a0ef60 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- 192.168.123.100:0/3973039928 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 msgr2=0x5578e5a0f330 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 --2- 192.168.123.100:0/3973039928 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 secure :-1 s=READY pgs=134 cs=0 l=1 rev1=1 crypto rx=0x7fa4140450d0 tx=0x7fa414053770 comp rx=0 tx=0).stop 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- 192.168.123.100:0/3973039928 shutdown_connections 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 --2- 192.168.123.100:0/3973039928 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e5a0f330 unknown :-1 s=CLOSED pgs=134 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- 192.168.123.100:0/3973039928 >> 192.168.123.100:0/3973039928 conn(0x5578e59d3950 msgr2=0x5578e59e2930 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- 192.168.123.100:0/3973039928 shutdown_connections 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- 192.168.123.100:0/3973039928 wait complete. 
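[editor's note] The "rados->read ... r=-2" replies for default.realm in this stretch (and the RGWPeriod::init ENOENT message that follows) are the tool probing RGW's realm/zonegroup/zone configuration objects directly in RADOS; a missing default.realm simply means no named realm was created for this cluster, which is expected for this test. A minimal librados sketch of the same kind of probe follows; the pool name is an assumption (realm/zone/zonegroup objects normally live in .rgw.root on a default install, while the log only identifies the pool by id 5).

    # Hedged sketch: read one of the RGW config objects probed above with the
    # Python rados bindings.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(".rgw.root")   # assumed pool name
        try:
            data = ioctx.read("default.realm")    # the object the tool probes first
            print("realm id object:", data)
        except rados.ObjectNotFound:
            # Matches the osd_op_reply "... = -2 ((2) No such file or directory)"
            # seen in the log: no named realm has been configured.
            print("no realm configured")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()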
2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 Processor -- start 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.910+0000 7fa42511f900 1 -- start start 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42511f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e59e6e40 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.907 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42511f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5578e59d4a30 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e59e6e40 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42165e640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e59e6e40 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59176/0 (socket says 192.168.123.100:59176) 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 learned_addr learned my addr 192.168.123.100:0/3437980365 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5578e59ec2c0 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42165e640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e59e6e40 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7fa41400a2c0 tx=0x7fa414053450 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa3ee7fc640 1 -- 192.168.123.100:0/3437980365 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa41401db10 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa3ee7fc640 1 -- 192.168.123.100:0/3437980365 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7fa41401dcb0 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa3ee7fc640 1 -- 192.168.123.100:0/3437980365 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7fa41402aa50 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x5578e59e94f0 con 0x5578e5a0ef60 2026-03-20T11:46:50.908 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.911+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x5578e59e8720 con 0x5578e5a0ef60 2026-03-20T11:46:50.909 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.912+0000 7fa3ee7fc640 1 -- 192.168.123.100:0/3437980365 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7fa41402abf0 con 0x5578e5a0ef60 2026-03-20T11:46:50.909 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.912+0000 7fa3ee7fc640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3e403c4a0 0x7fa3e405c950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.909 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.912+0000 7fa3ee7fc640 1 -- 192.168.123.100:0/3437980365 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7fa414034070 con 0x5578e5a0ef60 2026-03-20T11:46:50.909 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.912+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3e403c4a0 0x7fa3e405c950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.909 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.912+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3e403c4a0 0x7fa3e405c950 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x5578e596ba00 tx=0x7fa41804a000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5a3f390 0x5578e5a5f840 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a5fef0 con 0x5578e5a3f390 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa413fff640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5a3f390 0x5578e5a5f840 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa413fff640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5a3f390 0x5578e5a5f840 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 
server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.910 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.913+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 realm 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a60880 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a61150 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 5.f 5:f43b6ece:::zone_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a614f0 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa413fff640 1 -- 
192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 zone_names.default [read 0~46 out=46b] v0'0 uv1 ondisk = 0) ==== 162+0+46 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.911 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:5 5.1d 5:bd648c13:::zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 5 ==== osd_op_reply(5 zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b [read 0~1060 out=1060b] v0'0 uv1 ondisk = 0) ==== 190+0+1060 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 rados_obj.operate() r=0 bl.length=1060 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 20 searching for the correct realm 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5a63ab0 0x5578e5a83e90 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:6 5.0 5:00000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a843d0 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.914+0000 7fa42165e640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5a63ab0 0x5578e5a83e90 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5a63ab0 0x5578e5a83e90 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(6 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 
192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:7 5.10 5:08000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a843d0 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 2 ==== osd_op_reply(7 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:8 5.8 5:10000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a843d0 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 3 ==== osd_op_reply(8 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:9 5.18 5:18000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a843d0 con 0x5578e5a3f390 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 6 ==== osd_op_reply(9 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa413fff640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7fa40c068070 0x7fa40c072460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:10 5.4 5:20000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.915+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7fa40c068070 0x7fa40c072460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa420e5d640 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7fa40c068070 0x7fa40c072460 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 
tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== osd_op_reply(10 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:11 5.14 5:28000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== osd_op_reply(11 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:12 5.c 5:30000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 7 ==== osd_op_reply(12 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:13 5.1c 5:38000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 4 ==== osd_op_reply(13 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.913 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.916+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:14 5.2 5:40000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 3 ==== osd_op_reply(14 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- 
osd_op(unknown.0.0:15 5.12 5:48000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 8 ==== osd_op_reply(15 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:16 5.a 5:50000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 5 ==== osd_op_reply(16 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:17 5.1a 5:58000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 9 ==== osd_op_reply(17 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:18 5.6 5:60000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 6 ==== osd_op_reply(18 [pgnls start_epoch 30 out=79b] v21'1 uv1 ondisk = 1) ==== 144+0+79 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:19 5.16 5:68000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 10 ==== osd_op_reply(19 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 
7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:20 5.e 5:70000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 7 ==== osd_op_reply(20 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.914 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.917+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:21 5.1e 5:78000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 4 ==== osd_op_reply(21 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:22 5.1 5:80000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 11 ==== osd_op_reply(22 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:23 5.11 5:88000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 12 ==== osd_op_reply(23 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:24 5.9 5:90000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 13 ==== osd_op_reply(24 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 
0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:25 5.19 5:98000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 14 ==== osd_op_reply(25 [pgnls start_epoch 30 out=74b] v21'1 uv1 ondisk = 1) ==== 144+0+74 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:26 5.5 5:a0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 5 ==== osd_op_reply(26 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.918+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:27 5.15 5:a8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 6 ==== osd_op_reply(27 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:28 5.d 5:b0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 8 ==== osd_op_reply(28 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:29 5.1d 5:b8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 
v2:192.168.123.100:6800/3952598619 15 ==== osd_op_reply(29 [pgnls start_epoch 30 out=107b] v21'1 uv1 ondisk = 1) ==== 144+0+107 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:30 5.3 5:c0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 7 ==== osd_op_reply(30 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:31 5.13 5:c8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 16 ==== osd_op_reply(31 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa40c0687c0 con 0x5578e5a3f390 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:32 5.b 5:d0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 9 ==== osd_op_reply(32 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:33 5.1b 5:d8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 10 ==== osd_op_reply(33 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.916 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.919+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:34 5.7 5:e0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x7fa40c068070 2026-03-20T11:46:50.917 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 8 ==== osd_op_reply(34 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.917 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:35 5.17 5:e8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.917 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 11 ==== osd_op_reply(35 [pgnls start_epoch 30 out=112b] v21'1 uv1 ondisk = 1) ==== 144+0+112 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.917 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:36 5.f 5:f0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a3f390 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 17 ==== osd_op_reply(36 [pgnls start_epoch 30 out=115b] v21'2 uv2 ondisk = 1) ==== 144+0+115 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:37 5.1f 5:f8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5578e5a61d40 con 0x5578e5a63ab0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 12 ==== osd_op_reply(37 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got default.zonegroup. 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got default.zone. 
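The RGW startup trace around this point shows the realm lookup failing: every read of the default.realm object returns -2 (ENOENT), so the process falls back to enumerating the metadata root pool with pgnls and RGWRados::pool_iterate (the "got ..." records here) to discover the zone and zonegroup objects. A minimal librados sketch of the same read-then-list pattern is below, for poking at a cluster like this one by hand; it is illustrative only and not part of the test, and the pool name ".rgw.root" and the conffile path are assumptions — the log itself only shows pool id 5.

    import rados

    # Try the well-known realm pointer first, then fall back to listing the pool,
    # mirroring the read -> pgnls/pool_iterate sequence in the log records above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # path assumed
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('.rgw.root')             # pool 5 in this run; name assumed
        try:
            data = ioctx.read('default.realm')              # returns ENOENT (-2) in this job
            print('default.realm present,', len(data), 'bytes')
        except rados.ObjectNotFound:
            # No default realm -- enumerate the pool the way pool_iterate does above.
            for obj in ioctx.list_objects():                # zone_info.*, zonegroup_info.*, *_names.*
                print('found', obj.key)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
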
2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:38 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a62590 con 0x5578e5a3f390 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 18 ==== osd_op_reply(38 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.920+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:39 5.f 5:f4c53578:::zonegroups_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a62980 con 0x5578e5a3f390 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 19 ==== osd_op_reply(39 zonegroups_names.default [read 0~46 out=46b] v0'0 uv2 ondisk = 0) ==== 168+0+46 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:40 5.17 5:ef670bd1:::zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a62980 con 0x5578e5a63ab0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 13 ==== osd_op_reply(40 
zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 [read 0~436 out=436b] v0'0 uv1 ondisk = 0) ==== 195+0+436 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados_obj.operate() r=0 bl.length=436 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 zone default found 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 4 Realm: () 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 4 ZoneGroup: default (99e38fc4-7684-4b79-8510-bfe8879a7ba0) 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 4 Zone: default (9ebc77aa-cea4-46bc-ae79-a91c2622665b) 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 zonegroup default 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:41 5.3 5:c52100b6:::period_config.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a84800 con 0x7fa40c068070 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 9 ==== osd_op_reply(41 period_config.default [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 165+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:42 6.2 6:4347d321:::bucket.sync-source-hints.:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a86590 con 0x5578e5a3f390 2026-03-20T11:46:50.919 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 20 ==== osd_op_reply(42 bucket.sync-source-hints. 
[call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 211+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.920 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.920 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.920 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:43 6.b 6:d467b91b:::bucket.sync-target-hints.:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a86590 con 0x5578e5a3f390 2026-03-20T11:46:50.920 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.921+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 21 ==== osd_op_reply(43 bucket.sync-target-hints. [call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 211+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.920 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.921 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 20 started sync module instance, tier type = 2026-03-20T11:46:50.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 20 started zone id=9ebc77aa-cea4-46bc-ae79-a91c2622665b (name=default) with tier type = 2026-03-20T11:46:50.922 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:44 7.1f 7:f95f44c2:::notify.0:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b87190 con 0x7fa40c068070 2026-03-20T11:46:50.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:45 7.0 7:05bf5b68:::notify.1:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b87d80 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:46 7.15 7:a93a5511:::notify.2:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b88970 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:47 7.e 7:7759931f:::notify.3:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b89560 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:48 7.d 
7:b4812045:::notify.4:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8a150 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:49 7.3 7:c609908c:::notify.5:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8acf0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:50 7.14 7:2b04a3e9:::notify.6:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8b860 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.922+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:51 7.9 7:93e5b521:::notify.7:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8c130 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.923+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 22 ==== osd_op_reply(45 notify.1 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:52 7.0 7:05bf5b68:::notify.1:head [watch watch cookie 93977738508368] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8a150 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 14 ==== osd_op_reply(47 notify.3 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 15 ==== osd_op_reply(46 notify.2 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 23 ==== osd_op_reply(50 notify.6 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 24 ==== osd_op_reply(48 notify.4 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 10 ==== osd_op_reply(44 notify.0 [create] v30'3 uv3 ondisk = 0) 
==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.924+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:53 7.e 7:7759931f:::notify.3:head [watch watch cookie 93977738511424] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8b860 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:54 7.15 7:a93a5511:::notify.2:head [watch watch cookie 93977738505376] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8c980 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 11 ==== osd_op_reply(49 notify.5 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 12 ==== osd_op_reply(51 notify.7 [create] v30'3 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 16 ==== osd_op_reply(53 notify.3 [watch watch cookie 93977738511424] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:55 7.14 7:2b04a3e9:::notify.6:head [watch watch cookie 93977738520512] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8c980 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:56 7.1f 7:f95f44c2:::notify.0:head [watch watch cookie 93977738517536] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8d250 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:57 7.d 7:b4812045:::notify.4:head [watch watch cookie 93977738533072] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8e400 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:58 7.3 7:c609908c:::notify.5:head [watch watch cookie 93977738537648] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b8f600 con 0x7fa40c068070 
2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:59 7.9 7:93e5b521:::notify.7:head [watch watch cookie 93977738542256] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b90800 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.925+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 17 ==== osd_op_reply(54 notify.2 [watch watch cookie 93977738505376] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.926+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 25 ==== osd_op_reply(52 notify.1 [watch watch cookie 93977738508368] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.926+0000 7fa42511f900 20 add_watcher() i=3 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.926+0000 7fa42511f900 20 add_watcher() i=2 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.926+0000 7fa42511f900 20 add_watcher() i=1 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 26 ==== osd_op_reply(55 notify.6 [watch watch cookie 93977738520512] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 20 add_watcher() i=6 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 27 ==== osd_op_reply(57 notify.4 [watch watch cookie 93977738533072] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 20 add_watcher() i=4 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 13 ==== osd_op_reply(58 notify.5 [watch watch cookie 93977738537648] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 14 ==== osd_op_reply(56 notify.0 [watch watch cookie 93977738517536] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 15 ==== osd_op_reply(59 notify.7 [watch watch cookie 93977738542256] v30'4 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 20 add_watcher() i=5 
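The watch operations above register a watcher on each of the eight notify.N control objects, and the "all 8 watchers are set, enabling cache" record that follows shortly is the condition for RGW to enable its metadata cache. When a startup like this hangs, it can help to confirm from a saved log that all eight watches were acknowledged; the helper below is an illustrative stand-alone script (not part of the suite), and the log path argument is a placeholder.

    import re

    # Count the acknowledged watch registrations on notify.0..notify.7 in a saved
    # teuthology log slice like the one above.
    def watchers_established(log_path, expected=8):
        seen = set()
        pat = re.compile(r"osd_op_reply\(\d+ (notify\.\d+) \[watch watch cookie \d+\].* ondisk = 0")
        with open(log_path) as fh:
            for line in fh:
                seen.update(pat.findall(line))
        return len(seen) >= expected, sorted(seen)

    # e.g. ok, objs = watchers_established('teuthology.log')   # path is a placeholder
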
2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 20 add_watcher() i=0 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 20 add_watcher() i=7 2026-03-20T11:46:50.924 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa42511f900 2 all 8 watchers are set, enabling cache 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa412ffd640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fa3b00071b0 0x7fa3b0027660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.927+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 6.4 6:22d26bf9:::data_loggenerations_metadata:head [call version.check_conds in=74b,call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028fb0 con 0x7fa3b00071b0 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.928+0000 7fa413fff640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fa3b00071b0 0x7fa3b0027660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.928+0000 7fa413fff640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fa3b00071b0 0x7fa3b0027660 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.928+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 data_loggenerations_metadata [call,call out=48b,read 0~28 out=28b] v0'0 uv1 ondisk = 0) ==== 256+0+76 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.925 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.928+0000 7fa4230e8640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 6.4 6:22d26bf9:::data_loggenerations_metadata:head [watch watch cookie 140341210518576] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x7fa3bc002030 con 0x7fa3b00071b0 2026-03-20T11:46:50.926 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 data_loggenerations_metadata [watch watch cookie 140341210518576] v30'24 uv1 ondisk = 0) ==== 172+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.926 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa42511f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T11:46:50.926 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa42511f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T11:46:50.926 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa42511f900 5 note: GC not initialized 2026-03-20T11:46:50.926 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa3ce7e4640 20 reqs_thread_entry: start 2026-03-20T11:46:50.926 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.929+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 6.e 6:74abc724:restore::restore.0:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5b1a550 con 0x7fa3b00071b0 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.930+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 restore.0 [call] v30'21 uv12 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.930+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 6.e 6:74abc724:restore::restore.0:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc006780 con 0x7fa3b00071b0 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.930+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 restore.0 [call out=166b] v0'0 uv12 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.930+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5b1bf50 0x5578e5b19940 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.930+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:5 6.14 6:293d40bf:restore::restore.1:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5b19e80 con 0x5578e5b1bf50 2026-03-20T11:46:50.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.931+0000 7fa42165e640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5b1bf50 0x5578e5b19940 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.928 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.931+0000 7fa42165e640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5b1bf50 0x5578e5b19940 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.932+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(5 restore.1 [call] v30'20 uv11 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 
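The restore.* objects being created around this point with cls calls (fifo.create_meta / fifo.get_meta) live in the "restore" namespace of pool 6, as shown by object keys such as 6:74abc724:restore::restore.0:head. A hedged sketch for listing those FIFO head objects with the librados Python bindings follows; only the namespace and object names come from the log, while the pool name "default.rgw.log" and the conffile path are assumptions.

    import rados

    # List the restore.* FIFO head objects created by the run above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # path assumed
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('default.rgw.log')       # pool 6 in this run; name assumed
        ioctx.set_namespace('restore')                       # namespace shown in the op keys
        for obj in ioctx.list_objects():
            print(obj.key)                                   # restore.0, restore.1, ...
        ioctx.close()
    finally:
        cluster.shutdown()
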
2026-03-20T11:46:50.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.932+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:6 6.14 6:293d40bf:restore::restore.1:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e8004460 con 0x5578e5b1bf50 2026-03-20T11:46:50.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.932+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 2 ==== osd_op_reply(6 restore.1 [call out=166b] v0'0 uv11 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.929 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.932+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:7 6.13 6:cc734541:restore::restore.2:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5b1a250 con 0x5578e5b1bf50 2026-03-20T11:46:50.930 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.933+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 3 ==== osd_op_reply(7 restore.2 [call] v30'15 uv10 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.930 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.933+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:8 6.13 6:cc734541:restore::restore.2:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5b1bf50 2026-03-20T11:46:50.930 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.934+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 4 ==== osd_op_reply(8 restore.2 [call out=166b] v0'0 uv10 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.930 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.934+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x5578e5ae6500 0x5578e5ae6900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:50.931 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.934+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:9 6.0 6:03a53c4b:restore::restore.3:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae6e40 con 0x5578e5ae6500 2026-03-20T11:46:50.931 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.934+0000 7fa420e5d640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x5578e5ae6500 0x5578e5ae6900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:50.931 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.934+0000 7fa420e5d640 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] 
conn(0x5578e5ae6500 0x5578e5ae6900 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:50.932 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.935+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== osd_op_reply(9 restore.3 [call] v30'28 uv17 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.932 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.935+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:10 6.0 6:03a53c4b:restore::restore.3:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800bb70 con 0x5578e5ae6500 2026-03-20T11:46:50.932 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.935+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== osd_op_reply(10 restore.3 [call out=166b] v0'0 uv17 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.932 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.935+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:11 6.2 6:4485ab68:restore::restore.4:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae76c0 con 0x7fa3b00071b0 2026-03-20T11:46:50.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.936+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 5 ==== osd_op_reply(11 restore.4 [call] v30'23 uv18 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.936+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:12 6.2 6:4485ab68:restore::restore.4:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc0067b0 con 0x7fa3b00071b0 2026-03-20T11:46:50.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.936+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 6 ==== osd_op_reply(12 restore.4 [call out=166b] v0'0 uv18 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.936+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:13 6.0 6:04e06ead:restore::restore.5:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae7f60 con 0x5578e5ae6500 2026-03-20T11:46:50.934 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.937+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 3 ==== osd_op_reply(13 restore.5 [call] v30'29 uv19 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.934 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.937+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> 
[v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:14 6.0 6:04e06ead:restore::restore.5:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e80044a0 con 0x5578e5ae6500 2026-03-20T11:46:50.934 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.937+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 4 ==== osd_op_reply(14 restore.5 [call out=166b] v0'0 uv19 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.934 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.937+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:15 6.19 6:99dcebbc:restore::restore.6:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae8800 con 0x5578e5ae6500 2026-03-20T11:46:50.935 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.938+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 5 ==== osd_op_reply(15 restore.6 [call] v30'22 uv9 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.935 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.938+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:16 6.19 6:99dcebbc:restore::restore.6:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5ae6500 2026-03-20T11:46:50.935 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.938+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 6 ==== osd_op_reply(16 restore.6 [call out=166b] v0'0 uv9 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.935 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.938+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:17 6.1e 6:7f8df977:restore::restore.7:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae90a0 con 0x7fa3b00071b0 2026-03-20T11:46:50.936 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.939+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 7 ==== osd_op_reply(17 restore.7 [call] v30'14 uv5 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.936 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.939+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:18 6.1e 6:7f8df977:restore::restore.7:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x7fa3b00071b0 2026-03-20T11:46:50.936 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.939+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 8 ==== osd_op_reply(18 restore.7 [call out=166b] v0'0 uv5 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.936 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.939+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:19 6.e 6:7569ea81:restore::restore.8:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5ae9940 con 0x7fa3b00071b0 2026-03-20T11:46:50.937 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.940+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 9 ==== osd_op_reply(19 restore.8 [call] v30'22 uv14 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.937 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.940+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:20 6.e 6:7569ea81:restore::restore.8:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc007e40 con 0x7fa3b00071b0 2026-03-20T11:46:50.937 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.940+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 10 ==== osd_op_reply(20 restore.8 [call out=166b] v0'0 uv14 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.937 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.941+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:21 6.7 6:e779991c:restore::restore.9:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aea1e0 con 0x5578e5ae6500 2026-03-20T11:46:50.938 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.941+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 7 ==== osd_op_reply(21 restore.9 [call] v30'20 uv15 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.938 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.941+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:22 6.7 6:e779991c:restore::restore.9:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e8004460 con 0x5578e5ae6500 2026-03-20T11:46:50.938 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.942+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 8 ==== osd_op_reply(22 restore.9 [call out=166b] v0'0 uv15 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.939 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.942+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:23 6.12 6:4c8eca8b:restore::restore.10:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aeaac0 con 0x5578e5ae6500 2026-03-20T11:46:50.939 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.942+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 9 ==== osd_op_reply(23 restore.10 [call] v30'13 uv6 ondisk = 0) 
==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.939 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.942+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:24 6.12 6:4c8eca8b:restore::restore.10:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5ae6500 2026-03-20T11:46:50.940 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.943+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 10 ==== osd_op_reply(24 restore.10 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.940 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.943+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:25 6.0 6:01ff4341:restore::restore.11:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aeb3c0 con 0x5578e5ae6500 2026-03-20T11:46:50.940 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.943+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 11 ==== osd_op_reply(25 restore.11 [call] v30'30 uv21 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.940 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.943+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:26 6.0 6:01ff4341:restore::restore.11:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x5578e5ae6500 2026-03-20T11:46:50.940 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.944+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 12 ==== osd_op_reply(26 restore.11 [call out=168b] v0'0 uv21 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.941 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.944+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:27 6.11 6:89a402d8:restore::restore.12:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aebcc0 con 0x5578e5b1bf50 2026-03-20T11:46:50.941 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.945+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 5 ==== osd_op_reply(27 restore.12 [call] v30'18 uv11 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.941 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.945+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:28 6.11 6:89a402d8:restore::restore.12:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc007e30 con 0x5578e5b1bf50 2026-03-20T11:46:50.942 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.945+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 
v2:192.168.123.100:6816/2144187382 6 ==== osd_op_reply(28 restore.12 [call out=168b] v0'0 uv11 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.942 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.945+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:29 6.5 6:a6ec72c6:restore::restore.13:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aebcc0 con 0x5578e5ae6500 2026-03-20T11:46:50.943 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.946+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 13 ==== osd_op_reply(29 restore.13 [call] v30'19 uv13 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.943 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.946+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:30 6.5 6:a6ec72c6:restore::restore.13:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e8004420 con 0x5578e5ae6500 2026-03-20T11:46:50.943 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.946+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 14 ==== osd_op_reply(30 restore.13 [call out=168b] v0'0 uv13 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.943 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.946+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:31 6.f 6:f5d18734:restore::restore.14:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aebbe0 con 0x5578e5b1bf50 2026-03-20T11:46:50.944 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.947+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 7 ==== osd_op_reply(31 restore.14 [call] v30'19 uv10 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.944 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.947+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:32 6.f 6:f5d18734:restore::restore.14:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5b1bf50 2026-03-20T11:46:50.944 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.947+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 8 ==== osd_op_reply(32 restore.14 [call out=168b] v0'0 uv10 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.944 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.947+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:33 6.2 6:476e3e28:restore::restore.15:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aec510 con 0x7fa3b00071b0 2026-03-20T11:46:50.945 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.948+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 11 ==== osd_op_reply(33 restore.15 [call] v30'24 uv20 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.945 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.948+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:34 6.2 6:476e3e28:restore::restore.15:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x7fa3b00071b0 2026-03-20T11:46:50.945 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.948+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 12 ==== osd_op_reply(34 restore.15 [call out=168b] v0'0 uv20 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.945 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.949+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:35 6.1c 6:3fd0a735:restore::restore.16:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aece20 con 0x7fa3b00071b0 2026-03-20T11:46:50.946 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.949+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 13 ==== osd_op_reply(35 restore.16 [call] v30'22 uv17 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.946 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.949+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:36 6.1c 6:3fd0a735:restore::restore.16:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc0082a0 con 0x7fa3b00071b0 2026-03-20T11:46:50.946 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.950+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 14 ==== osd_op_reply(36 restore.16 [call out=168b] v0'0 uv17 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.946 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.950+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:37 6.18 6:1eac3643:restore::restore.17:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aec3e0 con 0x5578e5ae6500 2026-03-20T11:46:50.947 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.950+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 15 ==== osd_op_reply(37 restore.17 [call] v30'16 uv15 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.947 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.950+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:38 6.18 6:1eac3643:restore::restore.17:head [call fifo.get_meta in=19b] snapc 0=[] 
ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e8004440 con 0x5578e5ae6500 2026-03-20T11:46:50.948 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.951+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 16 ==== osd_op_reply(38 restore.17 [call out=168b] v0'0 uv15 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.948 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.951+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:39 6.1 6:804fdd09:restore::restore.18:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aed750 con 0x7fa3b00071b0 2026-03-20T11:46:50.949 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.952+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 15 ==== osd_op_reply(39 restore.18 [call] v30'21 uv12 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.949 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.952+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:40 6.1 6:804fdd09:restore::restore.18:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x7fa3b00071b0 2026-03-20T11:46:50.949 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.952+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 16 ==== osd_op_reply(40 restore.18 [call out=168b] v0'0 uv12 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.949 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.952+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:41 6.e 6:72cf9f9c:restore::restore.19:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aee080 con 0x7fa3b00071b0 2026-03-20T11:46:50.950 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.953+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 17 ==== osd_op_reply(41 restore.19 [call] v30'23 uv16 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.950 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.953+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:42 6.e 6:72cf9f9c:restore::restore.19:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x7fa3b00071b0 2026-03-20T11:46:50.950 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.953+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 18 ==== osd_op_reply(42 restore.19 [call out=168b] v0'0 uv16 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.950 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.954+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- 
osd_op(unknown.0.0:43 6.7 6:e2f222a4:restore::restore.20:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aed810 con 0x5578e5ae6500 2026-03-20T11:46:50.951 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.954+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 17 ==== osd_op_reply(43 restore.20 [call] v30'21 uv17 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.951 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.954+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:44 6.7 6:e2f222a4:restore::restore.20:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc008250 con 0x5578e5ae6500 2026-03-20T11:46:50.951 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.954+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 18 ==== osd_op_reply(44 restore.20 [call out=168b] v0'0 uv17 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.951 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.955+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:45 6.19 6:9f54a4c7:restore::restore.21:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aee990 con 0x5578e5ae6500 2026-03-20T11:46:50.952 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.955+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 19 ==== osd_op_reply(45 restore.21 [call] v30'23 uv15 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.952 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.955+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:46 6.19 6:9f54a4c7:restore::restore.21:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e80043a0 con 0x5578e5ae6500 2026-03-20T11:46:50.953 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.956+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 20 ==== osd_op_reply(46 restore.21 [call out=168b] v0'0 uv15 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.953 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.956+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:47 6.18 6:1eddfd8c:restore::restore.22:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aef2a0 con 0x5578e5ae6500 2026-03-20T11:46:50.953 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.956+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 21 ==== osd_op_reply(47 restore.22 [call] v30'17 uv9 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.953 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.956+0000 7fa412ffd640 1 -- 
192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:48 6.18 6:1eddfd8c:restore::restore.22:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5ae6500 2026-03-20T11:46:50.954 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.957+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 22 ==== osd_op_reply(48 restore.22 [call out=168b] v0'0 uv9 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.954 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.957+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:49 6.11 6:88da716a:restore::restore.23:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aeea50 con 0x5578e5b1bf50 2026-03-20T11:46:50.955 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.958+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 9 ==== osd_op_reply(49 restore.23 [call] v30'19 uv5 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.955 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.958+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:50 6.11 6:88da716a:restore::restore.23:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x5578e5b1bf50 2026-03-20T11:46:50.955 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.958+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 10 ==== osd_op_reply(50 restore.23 [call out=168b] v0'0 uv5 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.955 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.958+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:51 6.1b 6:dd126c37:restore::restore.24:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aefbb0 con 0x5578e5ae6500 2026-03-20T11:46:50.956 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.959+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 23 ==== osd_op_reply(51 restore.24 [call] v30'9 uv6 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 2026-03-20T11:46:50.956 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.959+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:52 6.1b 6:dd126c37:restore::restore.24:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc0081e0 con 0x5578e5ae6500 2026-03-20T11:46:50.956 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.959+0000 7fa420e5d640 1 -- 192.168.123.100:0/2191240883 <== osd.0 v2:192.168.123.100:6808/1162726296 24 ==== osd_op_reply(52 restore.24 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41802bba0 con 0x5578e5ae6500 
2026-03-20T11:46:50.956 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.959+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:53 6.1c 6:3a351582:restore::restore.25:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aef360 con 0x7fa3b00071b0 2026-03-20T11:46:50.957 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.960+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 19 ==== osd_op_reply(53 restore.25 [call] v30'23 uv13 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.957 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.960+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:54 6.1c 6:3a351582:restore::restore.25:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e80043a0 con 0x7fa3b00071b0 2026-03-20T11:46:50.957 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.960+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 20 ==== osd_op_reply(54 restore.25 [call out=168b] v0'0 uv13 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.957 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.960+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:55 6.17 6:e90c9fba:restore::restore.26:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aee140 con 0x7fa3b00071b0 2026-03-20T11:46:50.962 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.965+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 21 ==== osd_op_reply(55 restore.26 [call] v30'10 uv5 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.962 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.965+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:56 6.17 6:e90c9fba:restore::restore.26:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x7fa3b00071b0 2026-03-20T11:46:50.962 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.965+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 22 ==== osd_op_reply(56 restore.26 [call out=168b] v0'0 uv5 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.962 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.965+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:57 6.14 6:2c1122a8:restore::restore.27:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5af0120 con 0x5578e5b1bf50 2026-03-20T11:46:50.963 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.966+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 11 ==== osd_op_reply(57 
restore.27 [call] v30'21 uv7 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.963 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.966+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:58 6.14 6:2c1122a8:restore::restore.27:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x5578e5b1bf50 2026-03-20T11:46:50.963 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.966+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 12 ==== osd_op_reply(58 restore.27 [call out=168b] v0'0 uv7 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.963 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.966+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:59 6.1 6:84bbc547:restore::restore.28:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5af0a00 con 0x7fa3b00071b0 2026-03-20T11:46:50.964 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.967+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 23 ==== osd_op_reply(59 restore.28 [call] v30'22 uv6 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.964 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.967+0000 7fa410ff9640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:60 6.1 6:84bbc547:restore::restore.28:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3dc0081e0 con 0x7fa3b00071b0 2026-03-20T11:46:50.964 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.967+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 24 ==== osd_op_reply(60 restore.28 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.964 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.967+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:61 6.2 6:44311ebf:restore::restore.29:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5aecc90 con 0x7fa3b00071b0 2026-03-20T11:46:50.965 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.968+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 25 ==== osd_op_reply(61 restore.29 [call] v30'25 uv14 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.965 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.968+0000 7fa4137fe640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:62 6.2 6:44311ebf:restore::restore.29:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3e80043a0 con 0x7fa3b00071b0 2026-03-20T11:46:50.965 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.968+0000 7fa413fff640 1 -- 
192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 26 ==== osd_op_reply(62 restore.29 [call out=168b] v0'0 uv14 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.965 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.969+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:63 6.14 6:2df96c99:restore::restore.30:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5af0f70 con 0x5578e5b1bf50 2026-03-20T11:46:50.966 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.969+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 13 ==== osd_op_reply(63 restore.30 [call] v30'22 uv9 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.966 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.969+0000 7fa412ffd640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:64 6.14 6:2df96c99:restore::restore.30:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b0028f80 con 0x5578e5b1bf50 2026-03-20T11:46:50.966 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.970+0000 7fa42165e640 1 -- 192.168.123.100:0/2191240883 <== osd.2 v2:192.168.123.100:6816/2144187382 14 ==== osd_op_reply(64 restore.30 [call out=168b] v0'0 uv9 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5b1bf50 2026-03-20T11:46:50.966 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.970+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:65 6.2 6:4739c10f:restore::restore.31:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5af13c0 con 0x7fa3b00071b0 2026-03-20T11:46:50.967 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.970+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 27 ==== osd_op_reply(65 restore.31 [call] v30'26 uv16 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.967 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.970+0000 7fa411ffb640 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:66 6.2 6:4739c10f:restore::restore.31:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7fa3b800d120 con 0x7fa3b00071b0 2026-03-20T11:46:50.967 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 28 ==== osd_op_reply(66 restore.31 [call out=168b] v0'0 uv16 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 init_complete bucket index max shards: 11 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 Filter name: none 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa37ffff640 20 reqs_thread_entry: start 
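
Illustrative aside (not part of the captured log): the repeated "call fifo.create_meta" / "call fifo.get_meta" operations above are OSD object-class calls that RGW issues while initializing its restore FIFO shards (restore.0 through restore.31). The sketch below shows how such a class method can be invoked directly through the librados Python binding; the conffile path, pool name, and the empty request payload are placeholders (the log only shows the pool id, and the real payload is a Ceph-encoded fifo request), so this demonstrates the call shape only.

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
cluster.connect()
try:
    # Pool name is an assumption; the log identifies the pool only by id (6).
    ioctx = cluster.open_ioctx('default.rgw.log')
    try:
        # Invoke the 'fifo' object class, 'get_meta' method on one restore shard.
        # The empty payload is only to show the call shape; the OSD expects an
        # encoded request and will typically reject b'' with an error.
        ret, out = ioctx.execute('restore.0', 'fifo', 'get_meta', b'')
        print('fifo.get_meta rc=%d, %d bytes of output' % (ret, len(out)))
    except rados.Error as e:
        print('class call rejected:', e)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
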
2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 cache get: name=default.rgw.meta+users.uid+testid : miss 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:60 8.13 8:cab2a603:users.uid::testid:head [call version.read in=11b,read 0~0,getxattrs] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b18e20 con 0x5578e5a3f390 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 28 ==== osd_op_reply(60 testid [call,read 0~0,getxattrs] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 234+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.uid+testid info.flags=0x0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 adding default.rgw.meta+users.uid+testid to cache LRU end 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 cache get: name=default.rgw.meta+users.email+tester@ceph.com : miss 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:61 8.15 8:aa735a0a:users.email::tester@ceph.com:head [stat,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b18e20 con 0x5578e5a63ab0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 18 ==== osd_op_reply(61 tester@ceph.com [stat,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 201+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.email+tester@ceph.com info.flags=0x0 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 adding default.rgw.meta+users.email+tester@ceph.com to cache LRU end 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 10 cache get: name=default.rgw.meta+users.keys+0555b35654ad1656d804 : miss 2026-03-20T11:46:50.968 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:50.968 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:62 8.11 8:8caedb2a:users.keys::0555b35654ad1656d804:head [stat,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b18e20 con 0x5578e5a63ab0 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.971+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 19 ==== osd_op_reply(62 0555b35654ad1656d804 [stat,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 206+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.keys+0555b35654ad1656d804 info.flags=0x0 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 10 adding default.rgw.meta+users.keys+0555b35654ad1656d804 to cache LRU end 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 10 cache get: name=default.rgw.meta+users.keys+0555b35654ad1656d804 : hit (negative entry) 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 10 cache get: name=default.rgw.meta+users.keys+0555b35654ad1656d804 : hit (negative entry) 2026-03-20T11:46:50.969 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.972+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:63 8.13 8:cab2a603:users.uid::testid:head [delete,create,call version.set in=58b,writefull 0~439 in=439b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afbee0 con 0x5578e5a3f390 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 29 ==== osd_op_reply(63 testid [delete,create,call,writefull 0~439] v30'1 uv1 ondisk = 0) ==== 276+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.uid+testid info.flags=0x17 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42511f900 10 moving default.rgw.meta+users.uid+testid to cache LRU end 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42511f900 10 distributing notification oid=default.rgw.control:notify.3 cni=[op: 0, obj: default.rgw.meta:users.uid:testid, ofs0, ns] 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:64 7.e 7:7759931f:::notify.3:head [notify cookie 93977737931616 in=630b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afca90 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 20 ==== watch-notify(notify (1) cookie 93977738511424 notify 128849018880 ret 0) ==== 660+0+0 (crc 0 0 0) 0x5578e59e8720 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa3ef7fe640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018880 cookie 93977738511424 notifier 4221 bl.length()=618 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa3ef7fe640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+testid info.flags=0x17 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa3ef7fe640 10 rgw watcher librados: moving default.rgw.meta+users.uid+testid to cache LRU end 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa3ef7fe640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:65 7.e 7:7759931f:::notify.3:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7fa3b4003d60 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 21 ==== osd_op_reply(64 notify.3 [notify cookie 93977737931616 out=8b] v0'0 uv3 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 22 ==== watch-notify(notify_complete (2) cookie 93977737931616 notify 128849018880 ret 0) ==== 42+0+48 (crc 0 0 0) 0x5578e59e94f0 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 23 ==== osd_op_reply(65 notify.3 [notify-ack] v0'0 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.970 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.973+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:66 8.15 8:aa735a0a:users.email::tester@ceph.com:head [delete,create,writefull 0~10 in=10b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afc990 con 0x5578e5a63ab0 2026-03-20T11:46:50.971 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 24 ==== osd_op_reply(66 tester@ceph.com [delete,create,writefull 0~10] v30'1 uv1 ondisk = 0) ==== 243+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.971 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.email+tester@ceph.com info.flags=0x7 2026-03-20T11:46:50.971 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa42511f900 10 moving default.rgw.meta+users.email+tester@ceph.com to cache LRU end 2026-03-20T11:46:50.971 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 
7fa42511f900 10 distributing notification oid=default.rgw.control:notify.0 cni=[op: 0, obj: default.rgw.meta:users.email:tester@ceph.com, ofs0, ns] 2026-03-20T11:46:50.971 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:67 7.1f 7:f95f44c2:::notify.0:head [notify cookie 93977737931616 in=188b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afc9d0 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 16 ==== watch-notify(notify (1) cookie 93977738517536 notify 128849018880 ret 0) ==== 218+0+0 (crc 0 0 0) 0x7fa41803f040 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa3effff640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018880 cookie 93977738517536 notifier 4221 bl.length()=176 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa3effff640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.email+tester@ceph.com info.flags=0x7 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa3effff640 10 rgw watcher librados: moving default.rgw.meta+users.email+tester@ceph.com to cache LRU end 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa3effff640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:68 7.1f 7:f95f44c2:::notify.0:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7fa3dc0081e0 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 17 ==== osd_op_reply(67 notify.0 [notify cookie 93977737931616 out=8b] v0'0 uv3 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 18 ==== watch-notify(notify_complete (2) cookie 93977737931616 notify 128849018880 ret 0) ==== 42+0+48 (crc 0 0 0) 0x7fa41803f040 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 19 ==== osd_op_reply(68 notify.0 [notify-ack] v0'0 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.972 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.975+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:69 8.11 8:8caedb2a:users.keys::0555b35654ad1656d804:head [delete,create,writefull 0~10 in=10b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5af8bb0 con 0x5578e5a63ab0 2026-03-20T11:46:50.973 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.976+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 
v2:192.168.123.100:6816/2144187382 25 ==== osd_op_reply(69 0555b35654ad1656d804 [delete,create,writefull 0~10] v30'1 uv1 ondisk = 0) ==== 248+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.973 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.976+0000 7fa42511f900 10 cache put: name=default.rgw.meta+users.keys+0555b35654ad1656d804 info.flags=0x7 2026-03-20T11:46:50.973 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.976+0000 7fa42511f900 10 moving default.rgw.meta+users.keys+0555b35654ad1656d804 to cache LRU end 2026-03-20T11:46:50.973 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.976+0000 7fa42511f900 10 distributing notification oid=default.rgw.control:notify.7 cni=[op: 0, obj: default.rgw.meta:users.keys:0555b35654ad1656d804, ofs0, ns] 2026-03-20T11:46:50.973 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.976+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:70 7.9 7:93e5b521:::notify.7:head [notify cookie 93977737931616 in=192b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5af8bb0 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 20 ==== watch-notify(notify (1) cookie 93977738542256 notify 128849018881 ret 0) ==== 222+0+0 (crc 0 0 0) 0x7fa41803f040 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa3ef7fe640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018881 cookie 93977738542256 notifier 4221 bl.length()=180 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa3ef7fe640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.keys+0555b35654ad1656d804 info.flags=0x7 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa3ef7fe640 10 rgw watcher librados: moving default.rgw.meta+users.keys+0555b35654ad1656d804 to cache LRU end 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 21 ==== osd_op_reply(70 notify.7 [notify cookie 93977737931616 out=8b] v0'0 uv3 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa3ef7fe640 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:71 7.9 7:93e5b521:::notify.7:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7fa3b4003d60 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 22 ==== watch-notify(notify_complete (2) cookie 93977737931616 notify 128849018881 ret 0) ==== 42+0+48 (crc 0 0 0) 0x7fa418032040 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 23 ==== osd_op_reply(71 notify.7 [notify-ack] v0'0 uv3 ondisk = 0) ==== 
152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout:{ 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "user_id": "testid", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "display_name": "M. Tester", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "email": "tester@ceph.com", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "suspended": 0, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "max_buckets": 1000, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "subusers": [], 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "keys": [ 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: { 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "user": "testid", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "access_key": "0555b35654ad1656d804", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "secret_key": "h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "active": true, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "create_date": "2026-03-20T11:46:50.972382Z" 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: } 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: ], 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "swift_keys": [], 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "caps": [], 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "op_mask": "read, write, delete", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "default_placement": "", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "default_storage_class": "", 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "placement_tags": [], 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "bucket_quota": { 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "enabled": false, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "check_on_raw": false, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "max_size": -1, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "max_size_kb": 0, 2026-03-20T11:46:50.974 INFO:tasks.workunit.client.0.vm00.stdout: "max_objects": -1 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: }, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "user_quota": { 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "enabled": false, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "check_on_raw": false, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "max_size": -1, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "max_size_kb": 0, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "max_objects": -1 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: }, 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "temp_url_keys": [], 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "type": "rgw", 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "mfa_ids": [], 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "account_id": "", 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "path": "/", 
2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "create_date": "2026-03-20T11:46:50.972374Z", 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "tags": [], 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: "group_ids": [] 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout:} 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.977+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:67 6.4 6:22d26bf9:::data_loggenerations_metadata:head [watch unwatch cookie 140341210518576] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5578e5af8bb0 con 0x7fa3b00071b0 2026-03-20T11:46:50.975 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.978+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 <== osd.1 v2:192.168.123.100:6800/3952598619 29 ==== osd_op_reply(67 data_loggenerations_metadata [watch unwatch cookie 140341210518576] v30'25 uv1 ondisk = 0) ==== 172+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x7fa3b00071b0 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:72 7.1f 7:f95f44c2:::notify.0:head [watch unwatch cookie 93977738517536] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5af8bb0 con 0x7fa40c068070 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:73 7.0 7:05bf5b68:::notify.1:head [watch unwatch cookie 93977738508368] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5aea5f0 con 0x5578e5a3f390 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:74 7.15 7:a93a5511:::notify.2:head [watch unwatch cookie 93977738505376] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5b1a220 con 0x5578e5a63ab0 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:75 7.e 7:7759931f:::notify.3:head [watch unwatch cookie 93977738511424] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e579cfa0 con 0x5578e5a63ab0 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:76 7.d 7:b4812045:::notify.4:head [watch unwatch cookie 93977738533072] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afab60 con 0x5578e5a3f390 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- 
osd_op(unknown.0.0:77 7.3 7:c609908c:::notify.5:head [watch unwatch cookie 93977738537648] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5a37fb0 con 0x7fa40c068070 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:78 7.14 7:2b04a3e9:::notify.6:head [watch unwatch cookie 93977738520512] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afaff0 con 0x5578e5a3f390 2026-03-20T11:46:50.976 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.979+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:79 7.9 7:93e5b521:::notify.7:head [watch unwatch cookie 93977738542256] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5578e5afba90 con 0x7fa40c068070 2026-03-20T11:46:50.977 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 24 ==== osd_op_reply(79 notify.7 [watch unwatch cookie 93977738542256] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.977 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 25 ==== osd_op_reply(77 notify.5 [watch unwatch cookie 93977738537648] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.977 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa420e5d640 1 -- 192.168.123.100:0/3437980365 <== osd.0 v2:192.168.123.100:6808/1162726296 26 ==== osd_op_reply(72 notify.0 [watch unwatch cookie 93977738517536] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41802bba0 con 0x7fa40c068070 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=7 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 2 removed watcher, disabling cache 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=5 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 26 ==== osd_op_reply(75 notify.3 [watch unwatch cookie 93977738511424] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42165e640 1 -- 192.168.123.100:0/3437980365 <== osd.2 v2:192.168.123.100:6816/2144187382 27 ==== osd_op_reply(74 notify.2 [watch unwatch cookie 93977738505376] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa41403e020 con 0x5578e5a63ab0 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 30 ==== osd_op_reply(73 notify.1 [watch unwatch cookie 93977738508368] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.978 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 31 ==== osd_op_reply(76 notify.4 [watch unwatch cookie 93977738533072] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 <== osd.1 v2:192.168.123.100:6800/3952598619 32 ==== osd_op_reply(78 notify.6 [watch unwatch cookie 93977738520512] v30'5 uv3 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7fa40c0ab040 con 0x5578e5a3f390 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=0 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=3 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=2 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=1 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=4 2026-03-20T11:46:50.978 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.981+0000 7fa42511f900 20 remove_watcher() i=6 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7fa40c068070 msgr2=0x7fa40c072460 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7fa40c068070 0x7fa40c072460 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5a3f390 msgr2=0x5578e5a5f840 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5a3f390 0x5578e5a5f840 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5a63ab0 msgr2=0x5578e5a83e90 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5a63ab0 0x5578e5a83e90 crc :-1 s=READY pgs=11 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> 
[v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3e403c4a0 msgr2=0x7fa3e405c950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3e403c4a0 0x7fa3e405c950 secure :-1 s=READY pgs=44 cs=0 l=1 rev1=1 crypto rx=0x5578e596ba00 tx=0x7fa41804a000 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 msgr2=0x5578e59e6e40 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/3437980365 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5a0ef60 0x5578e59e6e40 secure :-1 s=READY pgs=135 cs=0 l=1 rev1=1 crypto rx=0x7fa41400a2c0 tx=0x7fa414053450 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa413fff640 1 -- 192.168.123.100:0/3437980365 reap_dead start 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 shutdown_connections 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 >> 192.168.123.100:0/3437980365 conn(0x5578e59d3950 msgr2=0x5578e5a0cc50 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 shutdown_connections 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/3437980365 wait complete. 
2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x5578e5ae6500 msgr2=0x5578e5ae6900 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x5578e5ae6500 0x5578e5ae6900 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fa3b00071b0 msgr2=0x7fa3b0027660 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7fa3b00071b0 0x7fa3b0027660 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5b1bf50 msgr2=0x5578e5b19940 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e5b1bf50 0x5578e5b19940 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3f003c450 msgr2=0x7fa3f005c900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa3f003c450 0x7fa3f005c900 secure :-1 s=READY pgs=43 cs=0 l=1 rev1=1 crypto rx=0x7fa41802beb0 tx=0x7fa418020470 comp rx=0 tx=0).stop 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 msgr2=0x5578e5983630 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.979 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa42511f900 1 --2- 192.168.123.100:0/2191240883 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e5983260 0x5578e5983630 secure :-1 s=READY pgs=133 cs=0 l=1 rev1=1 crypto rx=0x5578e58cceb0 tx=0x7fa414017820 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.982+0000 7fa413fff640 1 -- 192.168.123.100:0/2191240883 reap_dead start 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 shutdown_connections 
2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 >> 192.168.123.100:0/2191240883 conn(0x5578e596db10 msgr2=0x5578e597ca30 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 shutdown_connections 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/2191240883 wait complete. 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 msgr2=0x5578e5947d00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 0x5578e5947d00 crc :-1 s=READY pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e594ad40 msgr2=0x5578e596b120 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5578e594ad40 0x5578e596b120 crc :-1 s=READY pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 msgr2=0x7fa40005c900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 0x7fa40005c900 secure :-1 s=READY pgs=42 cs=0 l=1 rev1=1 crypto rx=0x5578e579d7d0 tx=0x7fa40c064000 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 msgr2=0x5578e58cbbe0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 secure :-1 s=READY pgs=132 cs=0 l=1 rev1=1 crypto rx=0x7fa418000c00 tx=0x7fa418000f00 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 shutdown_connections 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] 
conn(0x5578e594ad40 0x5578e596b120 unknown :-1 s=CLOSED pgs=10 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5578e5799120 0x5578e5947d00 unknown :-1 s=CLOSED pgs=12 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7fa40003c450 0x7fa40005c900 unknown :-1 s=CLOSED pgs=42 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 --2- 192.168.123.100:0/3799469788 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5578e578ae80 0x5578e58cbbe0 unknown :-1 s=CLOSED pgs=132 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 >> 192.168.123.100:0/3799469788 conn(0x5578e57a2a20 msgr2=0x5578e58bef90 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 shutdown_connections 2026-03-20T11:46:50.980 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:50.983+0000 7fa42511f900 1 -- 192.168.123.100:0/3799469788 wait complete. 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cc3b1f900 1 Processor -- start 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cc3b1f900 1 -- start start 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cc3b1f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cc3b1f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5588761f41d0 con 0x5588763a2940 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cbaffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cbaffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59190/0 (socket says 192.168.123.100:59190) 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cbaffd640 1 -- 192.168.123.100:0/120446133 learned_addr learned my addr 192.168.123.100:0/120446133 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.012+0000 7f5cbaffd640 1 -- 
192.168.123.100:0/120446133 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x5588763c3300 con 0x5588763a2940 2026-03-20T11:46:51.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/120446133 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f5cac006fa0 tx=0x7f5cac001ae0 comp rx=0 tx=0).ready entity=mon.0 client_cookie=b25a00d61c7483fe server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cb17fa640 1 -- 192.168.123.100:0/120446133 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cac004520 con 0x5588763a2940 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cb17fa640 1 -- 192.168.123.100:0/120446133 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5cac0046c0 con 0x5588763a2940 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cb17fa640 1 -- 192.168.123.100:0/120446133 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cac0049a0 con 0x5588763a2940 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/120446133 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 msgr2=0x5588763c2040 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/120446133 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 secure :-1 s=READY pgs=138 cs=0 l=1 rev1=1 crypto rx=0x7f5cac006fa0 tx=0x7f5cac001ae0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/120446133 shutdown_connections 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/120446133 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x5588763c2040 unknown :-1 s=CLOSED pgs=138 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/120446133 >> 192.168.123.100:0/120446133 conn(0x5588762fbc20 msgr2=0x5588763c9420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/120446133 shutdown_connections 2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.013+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/120446133 wait complete. 
2026-03-20T11:46:51.010 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cc3b1f900 1 Processor -- start 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cc3b1f900 1 -- start start 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cc3b1f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cc3b1f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5588762fb0a0 con 0x5588763a2940 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cbaffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cbaffd640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59198/0 (socket says 192.168.123.100:59198) 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cbaffd640 1 -- 192.168.123.100:0/3862315907 learned_addr learned my addr 192.168.123.100:0/3862315907 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cbaffd640 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x558876376110 con 0x5588763a2940 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f5cac001250 tx=0x7f5cac001280 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5ca37fe640 1 -- 192.168.123.100:0/3862315907 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cac022020 con 0x5588763a2940 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5ca37fe640 1 -- 192.168.123.100:0/3862315907 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5cac014940 con 0x5588763a2940 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.014+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x558876387850 con 0x5588763a2940 2026-03-20T11:46:51.011 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.015+0000 7f5ca37fe640 1 -- 192.168.123.100:0/3862315907 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cac014c20 con 0x5588763a2940 2026-03-20T11:46:51.012 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.015+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x558876388520 con 0x5588763a2940 2026-03-20T11:46:51.012 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.015+0000 7f5ca37fe640 1 -- 192.168.123.100:0/3862315907 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f5cac02f020 con 0x5588763a2940 2026-03-20T11:46:51.012 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5ca37fe640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 0x7f5c9c05c900 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 0x7f5c9c05c900 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5ca37fe640 1 -- 192.168.123.100:0/3862315907 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7f5cac018030 con 0x5588763a2940 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 0x7f5c9c05c900 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5ca4002f70 tx=0x7f5ca4064000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 0x55887638c7c0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887638cd00 con 0x5588763c9dc0 2026-03-20T11:46:51.013 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.016+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 0x55887638c7c0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.017+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 0x55887638c7c0 crc :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.017+0000 
7f5cbb7fe640 1 -- 192.168.123.100:0/3862315907 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588763c9dc0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.017+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 5.f 5:f43b6ece:::zone_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887638d0d0 con 0x5588763c9dc0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.017+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/3862315907 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 zone_names.default [read 0~46 out=46b] v0'0 uv1 ondisk = 0) ==== 162+0+46 (crc 0 0 0) 0x7f5cb400c420 con 0x5588763c9dc0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.017+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 5.1d 5:bd648c13:::zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887638e4d0 con 0x5588763c9dc0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/3862315907 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b [call out=48b,read 0~1060 out=1060b] v0'0 uv1 ondisk = 0) ==== 232+0+1108 (crc 0 0 0) 0x7f5cb400c420 con 0x5588763c9dc0 2026-03-20T11:46:51.014 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 5.f 5:f4c53578:::zonegroups_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887638d970 con 0x5588763c9dc0 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/3862315907 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 zonegroups_names.default [read 0~46 out=46b] v0'0 uv2 ondisk = 0) ==== 168+0+46 (crc 0 0 0) 0x7f5cb400c420 con 0x5588763c9dc0 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 0x55887638fdc0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:5 5.17 5:ef670bd1:::zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876390300 con 0x55887638f960 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/3862315907 >> 
[v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 0x55887638fdc0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 0x55887638fdc0 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.015 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.018+0000 7f5cbaffd640 1 -- 192.168.123.100:0/3862315907 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(5 zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 [call out=48b,read 0~436 out=436b] v0'0 uv1 ondisk = 0) ==== 237+0+484 (crc 0 0 0) 0x7f5cac01b070 con 0x55887638f960 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cc3b1f900 1 Processor -- start 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cc3b1f900 1 -- start start 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cc3b1f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 0x55887639a7f0 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cc3b1f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5588761fe270 con 0x55887639a420 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 0x55887639a7f0 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 0x55887639a7f0 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59204/0 (socket says 192.168.123.100:59204) 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 learned_addr learned my addr 192.168.123.100:0/2133862501 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.019+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x55887639ccb0 con 0x55887639a420 2026-03-20T11:46:51.016 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 0x55887639a7f0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x558876375510 tx=0x7f5cb400b780 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.017 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5ca0ff9640 1 -- 192.168.123.100:0/2133862501 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb401b580 con 0x55887639a420 2026-03-20T11:46:51.017 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x55887639be70 con 0x55887639a420 2026-03-20T11:46:51.017 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5ca0ff9640 1 -- 192.168.123.100:0/2133862501 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5cb401b740 con 0x55887639a420 2026-03-20T11:46:51.017 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x55887639ad30 con 0x55887639a420 2026-03-20T11:46:51.017 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.020+0000 7f5ca0ff9640 1 -- 192.168.123.100:0/2133862501 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb401ba60 con 0x55887639a420 2026-03-20T11:46:51.018 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.021+0000 7f5ca0ff9640 1 -- 192.168.123.100:0/2133862501 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f5cb401bc00 con 0x55887639a420 2026-03-20T11:46:51.018 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.021+0000 7f5ca0ff9640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9003c3b0 0x7f5c9005c860 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.021+0000 7f5ca0ff9640 1 -- 192.168.123.100:0/2133862501 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7f5cb4055cb0 con 0x55887639a420 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.021+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9003c3b0 0x7f5c9005c860 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cc3b1f900 1 Processor -- start 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9003c3b0 0x7f5c9005c860 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x55887638dd10 tx=0x7f5cac028f50 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cc3b1f900 1 -- start start 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cc3b1f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 unknown :-1 s=NONE pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.019 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cc3b1f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x558876529670 con 0x558876540030 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=0 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=0 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59216/0 (socket says 192.168.123.100:59216) 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/282954220 learned_addr learned my addr 192.168.123.100:0/282954220 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/282954220 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x558876533af0 con 0x558876540030 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.022+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/282954220 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f5cb4016d30 tx=0x7f5cb4016800 comp rx=0 tx=0).ready entity=mon.0 client_cookie=383a0ae4745ad086 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.019 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5c9a7fc640 1 -- 192.168.123.100:0/282954220 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb404c070 con 0x558876540030 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5c9a7fc640 1 -- 192.168.123.100:0/282954220 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5cb40028b0 con 0x558876540030 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5c9a7fc640 1 -- 192.168.123.100:0/282954220 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb400a340 con 0x558876540030 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/282954220 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 msgr2=0x558876533e00 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/282954220 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 secure :-1 s=READY pgs=141 cs=0 l=1 rev1=1 crypto rx=0x7f5cb4016d30 tx=0x7f5cb4016800 comp rx=0 tx=0).stop 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/282954220 shutdown_connections 2026-03-20T11:46:51.020 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/282954220 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876533e00 unknown :-1 s=CLOSED pgs=141 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/282954220 >> 192.168.123.100:0/282954220 conn(0x558876527460 msgr2=0x5588765363f0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/282954220 shutdown_connections 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.023+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/282954220 wait complete. 2026-03-20T11:46:51.020 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cc3b1f900 1 Processor -- start 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cc3b1f900 1 -- start start 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cc3b1f900 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876536d20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cc3b1f900 1 -- --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_getmap magic: 0 -- 0x5588765283c0 con 0x558876540030 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876536d20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cbb7fe640 1 --2- >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876536d20 unknown :-1 s=HELLO_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).handle_hello peer v2:192.168.123.100:3300/0 says I am v2:192.168.123.100:59228/0 (socket says 192.168.123.100:59228) 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 learned_addr learned my addr 192.168.123.100:0/4275789852 (peer_addr_for_me v2:192.168.123.100:0/0) 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({config=0+,monmap=0+}) -- 0x558876538bd0 con 0x558876540030 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876536d20 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f5cb4052ec0 tx=0x7f5cb4054260 comp rx=0 tx=0).ready entity=mon.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5c98ff9640 1 -- 192.168.123.100:0/4275789852 <== mon.0 v2:192.168.123.100:3300/0 1 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb4032070 
con 0x558876540030 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5c98ff9640 1 -- 192.168.123.100:0/4275789852 <== mon.0 v2:192.168.123.100:3300/0 2 ==== config(0 keys) ==== 4+0+0 (secure 0 0 0) 0x7f5cb400ac60 con 0x558876540030 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.024+0000 7f5c98ff9640 1 -- 192.168.123.100:0/4275789852 <== mon.0 v2:192.168.123.100:3300/0 3 ==== mon_map magic: 0 ==== 205+0+0 (secure 0 0 0) 0x7f5cb404c040 con 0x558876540030 2026-03-20T11:46:51.021 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({mgrmap=0+}) -- 0x5588765394a0 con 0x558876540030 2026-03-20T11:46:51.022 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] -- mon_subscribe({osdmap=0}) -- 0x5588765367d0 con 0x558876540030 2026-03-20T11:46:51.022 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5c98ff9640 1 -- 192.168.123.100:0/4275789852 <== mon.0 v2:192.168.123.100:3300/0 4 ==== mgrmap(e 5) ==== 51186+0+0 (secure 0 0 0) 0x7f5cb403e020 con 0x558876540030 2026-03-20T11:46:51.022 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5c98ff9640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c8003c4a0 0x7f5c8005c950 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.022 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5c98ff9640 1 -- 192.168.123.100:0/4275789852 <== mon.0 v2:192.168.123.100:3300/0 5 ==== osd_map(30..30 src has 1..30) ==== 5434+0+0 (secure 0 0 0) 0x7f5cb4040ec0 con 0x558876540030 2026-03-20T11:46:51.022 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.025+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c8003c4a0 0x7f5c8005c950 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.026+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c8003c4a0 0x7f5c8005c950 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f5cac01e560 tx=0x7f5cac04b000 comp rx=0 tx=0).ready entity=mgr.4104 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.026+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.026+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x558876597550 0x5588765b7a00 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.026+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] 
ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765b80b0 con 0x558876597550 2026-03-20T11:46:51.023 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x558876597550 0x5588765b7a00 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x558876597550 0x5588765b7a00 crc :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 realm 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765b8a40 con 0x558876597550 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 4 RGWPeriod::init failed to init realm id : (2) No such file or directory 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765b9310 con 0x558876597550 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 
0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.024 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.027+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 5.f 5:f43b6ece:::zone_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765b96b0 con 0x558876597550 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 zone_names.default [read 0~46 out=46b] v0'0 uv1 ondisk = 0) ==== 162+0+46 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:5 5.1d 5:bd648c13:::zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 5 ==== osd_op_reply(5 zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b [read 0~1060 out=1060b] v0'0 uv1 ondisk = 0) ==== 190+0+1060 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 20 rados_obj.operate() r=0 bl.length=1060 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 20 searching for the correct realm 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5588765bbc70 0x5588765dc050 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:6 5.0 5:00000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765dc590 con 0x5588765bbc70 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5588765bbc70 0x5588765dc050 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 
2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5588765bbc70 0x5588765dc050 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(6 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.028+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:7 5.10 5:08000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765dc590 con 0x5588765bbc70 2026-03-20T11:46:51.025 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 2 ==== osd_op_reply(7 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:8 5.8 5:10000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765dc590 con 0x5588765bbc70 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 3 ==== osd_op_reply(8 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:9 5.18 5:18000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765dc590 con 0x558876597550 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 6 ==== osd_op_reply(9 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f5ca4068070 0x7f5ca4072460 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:10 5.4 5:20000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] 
ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f5ca4068070 0x7f5ca4072460 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.029+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f5ca4068070 0x7f5ca4072460 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== osd_op_reply(10 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:11 5.14 5:28000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== osd_op_reply(11 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:12 5.c 5:30000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 7 ==== osd_op_reply(12 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:13 5.1c 5:38000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 4 ==== osd_op_reply(13 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbb7fe640 1 -- 
192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:14 5.2 5:40000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 3 ==== osd_op_reply(14 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:15 5.12 5:48000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 8 ==== osd_op_reply(15 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.030+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:16 5.a 5:50000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 5 ==== osd_op_reply(16 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:17 5.1a 5:58000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 9 ==== osd_op_reply(17 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:18 5.6 5:60000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 6 ==== osd_op_reply(18 [pgnls start_epoch 30 out=79b] v21'1 uv1 ondisk = 1) ==== 144+0+79 (crc 0 0 0) 0x7f5cb400c420 con 
0x5588765bbc70 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:19 5.16 5:68000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 10 ==== osd_op_reply(19 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.031+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:20 5.e 5:70000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.028 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 7 ==== osd_op_reply(20 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:21 5.1e 5:78000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 4 ==== osd_op_reply(21 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:22 5.1 5:80000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 11 ==== osd_op_reply(22 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:23 5.11 5:88000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 
12 ==== osd_op_reply(23 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.032+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:24 5.9 5:90000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.030 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.033+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 13 ==== osd_op_reply(24 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.030 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.033+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:25 5.19 5:98000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.030 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.033+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 14 ==== osd_op_reply(25 [pgnls start_epoch 30 out=74b] v21'1 uv1 ondisk = 1) ==== 144+0+74 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.030 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.033+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:26 5.5 5:a0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 5 ==== osd_op_reply(26 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:27 5.15 5:a8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 6 ==== osd_op_reply(27 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:28 5.d 5:b0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.031 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 8 ==== osd_op_reply(28 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:29 5.1d 5:b8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 15 ==== osd_op_reply(29 [pgnls start_epoch 30 out=107b] v21'1 uv1 ondisk = 1) ==== 144+0+107 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:30 5.3 5:c0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 7 ==== osd_op_reply(30 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:31 5.13 5:c8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x558876597550 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 16 ==== osd_op_reply(31 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5ca4068800 con 0x558876597550 2026-03-20T11:46:51.031 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.034+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:32 5.b 5:d0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 9 ==== osd_op_reply(32 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:33 5.1b 5:d8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] 
ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x5588765b9f00 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 10 ==== osd_op_reply(33 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:34 5.7 5:e0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x7f5cb4068040 con 0x7f5ca4068070 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 8 ==== osd_op_reply(34 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:35 5.17 5:e8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x7f5cb4068040 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 11 ==== osd_op_reply(35 [pgnls start_epoch 30 out=112b] v21'1 uv1 ondisk = 1) ==== 144+0+112 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:36 5.f 5:f0000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x7f5cb4068040 con 0x558876597550 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 17 ==== osd_op_reply(36 [pgnls start_epoch 30 out=115b] v21'2 uv2 ondisk = 1) ==== 144+0+115 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.032 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.035+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:37 5.1f 5:f8000000::::head [pgnls start_epoch 30 in=39b] snapc 0=[] ondisk+read+ignore_overlay+known_if_redirected+supports_pool_eio e30) -- 0x7f5cb4068040 con 0x5588765bbc70 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 12 ==== osd_op_reply(37 [pgnls start_epoch 30 out=49b] v0'0 uv0 ondisk = 1) ==== 144+0+49 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got default.zonegroup. 
2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got default.zone. 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got zone_info.9ebc77aa-cea4-46bc-ae79-a91c2622665b 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got zone_names.default 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 RGWRados::pool_iterate: got zonegroups_names.default 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:38 5.12 5:49953fa1:::default.realm:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765ba750 con 0x558876597550 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 18 ==== osd_op_reply(38 default.realm [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 157+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:39 5.f 5:f4c53578:::zonegroups_names.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765bab40 con 0x558876597550 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 19 ==== osd_op_reply(39 zonegroups_names.default [read 0~46 out=46b] v0'0 uv2 ondisk = 0) ==== 168+0+46 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados_obj.operate() r=0 bl.length=46 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:40 5.17 5:ef670bd1:::zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765bab40 con 0x5588765bbc70 2026-03-20T11:46:51.033 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 13 ==== osd_op_reply(40 zonegroup_info.99e38fc4-7684-4b79-8510-bfe8879a7ba0 [read 0~436 out=436b] v0'0 uv1 ondisk = 0) ==== 195+0+436 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados_obj.operate() r=0 bl.length=436 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 zone default found 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 4 Realm: () 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 4 ZoneGroup: default (99e38fc4-7684-4b79-8510-bfe8879a7ba0) 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 4 Zone: default (9ebc77aa-cea4-46bc-ae79-a91c2622665b) 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 10 cannot find current period zonegroup using local zonegroup configuration 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 zonegroup default 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:41 5.3 5:c52100b6:::period_config.default:head [read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765baf30 con 0x7f5ca4068070 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 9 ==== osd_op_reply(41 period_config.default [read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 165+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.033 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.036+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:42 6.2 6:4347d321:::bucket.sync-source-hints.:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765de3b0 con 0x558876597550 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 20 ==== osd_op_reply(42 bucket.sync-source-hints. 
[call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 211+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:43 6.b 6:d467b91b:::bucket.sync-target-hints.:head [call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588765de3b0 con 0x558876597550 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 21 ==== osd_op_reply(43 bucket.sync-target-hints. [call,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 211+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 20 started sync module instance, tier type = 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 20 started zone id=9ebc77aa-cea4-46bc-ae79-a91c2622665b (name=default) with tier type = 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:44 7.1f 7:f95f44c2:::notify.0:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766df0f0 con 0x7f5ca4068070 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:45 7.0 7:05bf5b68:::notify.1:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766dfce0 con 0x558876597550 2026-03-20T11:46:51.034 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:46 7.15 7:a93a5511:::notify.2:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e08d0 con 0x5588765bbc70 2026-03-20T11:46:51.035 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:47 7.e 7:7759931f:::notify.3:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e14c0 con 0x5588765bbc70 2026-03-20T11:46:51.035 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:48 7.d 
7:b4812045:::notify.4:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e1d60 con 0x558876597550 2026-03-20T11:46:51.035 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:49 7.3 7:c609908c:::notify.5:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e28c0 con 0x7f5ca4068070 2026-03-20T11:46:51.035 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:50 7.14 7:2b04a3e9:::notify.6:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e34b0 con 0x558876597550 2026-03-20T11:46:51.035 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.037+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:51 7.9 7:93e5b521:::notify.7:head [create] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e40a0 con 0x7f5ca4068070 2026-03-20T11:46:51.037 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 10 ==== osd_op_reply(44 notify.0 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.037 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 11 ==== osd_op_reply(49 notify.5 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.037 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 22 ==== osd_op_reply(45 notify.1 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.037 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 23 ==== osd_op_reply(50 notify.6 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.037 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 24 ==== osd_op_reply(48 notify.4 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:52 7.1f 7:f95f44c2:::notify.0:head [watch watch cookie 94044590842768] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e08d0 con 0x7f5ca4068070 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:53 7.3 
7:c609908c:::notify.5:head [watch watch cookie 94044590828480] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e14c0 con 0x7f5ca4068070 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:54 7.0 7:05bf5b68:::notify.1:head [watch watch cookie 94044590851136] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e4db0 con 0x558876597550 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:55 7.14 7:2b04a3e9:::notify.6:head [watch watch cookie 94044590855248] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e5fa0 con 0x558876597550 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.040+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:56 7.d 7:b4812045:::notify.4:head [watch watch cookie 94044590859808] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e14c0 con 0x558876597550 2026-03-20T11:46:51.038 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.041+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 12 ==== osd_op_reply(51 notify.7 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.039 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.042+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 25 ==== osd_op_reply(54 notify.1 [watch watch cookie 94044590851136] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.039 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.042+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:57 7.9 7:93e5b521:::notify.7:head [watch watch cookie 94044590848880] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e5fa0 con 0x7f5ca4068070 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 26 ==== osd_op_reply(55 notify.6 [watch watch cookie 94044590855248] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 27 ==== osd_op_reply(56 notify.4 [watch watch cookie 94044590859808] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 13 ==== osd_op_reply(53 notify.5 [watch watch cookie 94044590828480] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.040 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 14 ==== osd_op_reply(52 notify.0 [watch watch cookie 94044590842768] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cc3b1f900 20 add_watcher() i=1 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cc3b1f900 20 add_watcher() i=6 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cc3b1f900 20 add_watcher() i=4 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cc3b1f900 20 add_watcher() i=5 2026-03-20T11:46:51.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.043+0000 7f5cc3b1f900 20 add_watcher() i=0 2026-03-20T11:46:51.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.044+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 15 ==== osd_op_reply(57 notify.7 [watch watch cookie 94044590848880] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.044+0000 7f5cc3b1f900 20 add_watcher() i=7 2026-03-20T11:46:51.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.044+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 14 ==== osd_op_reply(47 notify.3 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.045+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 15 ==== osd_op_reply(46 notify.2 [create] v30'6 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.041 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.045+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:58 7.e 7:7759931f:::notify.3:head [watch watch cookie 94044590856672] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766e5f60 con 0x5588765bbc70 2026-03-20T11:46:51.042 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.045+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:59 7.15 7:a93a5511:::notify.2:head [watch watch cookie 94044590834592] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766df790 con 0x5588765bbc70 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.045+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 16 ==== osd_op_reply(58 notify.3 [watch watch cookie 94044590856672] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 17 ==== osd_op_reply(59 notify.2 [watch watch cookie 94044590834592] v30'7 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 
2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cc3b1f900 20 add_watcher() i=3 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cc3b1f900 20 add_watcher() i=2 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cc3b1f900 2 all 8 watchers are set, enabling cache 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cb97fa640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7f5c500071b0 0x7f5c50027660 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:1 6.4 6:22d26bf9:::data_loggenerations_metadata:head [call version.check_conds in=74b,call version.read in=11b,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028fb0 con 0x7f5c500071b0 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7f5c500071b0 0x7f5c50027660 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cba7fc640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7f5c500071b0 0x7f5c50027660 crc :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.1 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 1 ==== osd_op_reply(1 data_loggenerations_metadata [call,call out=48b,read 0~28 out=28b] v0'0 uv1 ondisk = 0) ==== 256+0+76 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.043 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.046+0000 7f5cc1a42640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:2 6.4 6:22d26bf9:::data_loggenerations_metadata:head [watch watch cookie 140034657227824] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5c002030 con 0x7f5c500071b0 2026-03-20T11:46:51.044 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.047+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 2 ==== osd_op_reply(2 data_loggenerations_metadata [watch watch cookie 140034657227824] v30'26 uv1 ondisk = 0) ==== 172+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.044 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.047+0000 7f5cc3b1f900 20 rgw_check_secure_mon_conn(): auth registy supported: methods=[2] modes=[2,1] 2026-03-20T11:46:51.044 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.047+0000 7f5cc3b1f900 20 rgw_check_secure_mon_conn(): mode 1 is insecure 2026-03-20T11:46:51.045 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.048+0000 7f5cc3b1f900 5 note: GC not initialized 2026-03-20T11:46:51.045 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.048+0000 7f5c6cfe1640 20 reqs_thread_entry: start 2026-03-20T11:46:51.045 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.048+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:3 6.e 6:74abc724:restore::restore.0:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876690be0 con 0x7f5c500071b0 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 3 ==== osd_op_reply(3 restore.0 [call] v30'24 uv12 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:4 6.e 6:74abc724:restore::restore.0:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c0068d0 con 0x7f5c500071b0 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 4 ==== osd_op_reply(4 restore.0 [call out=166b] v0'0 uv12 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x558876691650 0x558876691a20 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:5 6.14 6:293d40bf:restore::restore.1:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876691f60 con 0x558876691650 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.049+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x558876691650 0x558876691a20 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.046 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.050+0000 7f5cbb7fe640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x558876691650 0x558876691a20 crc :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.2 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.047 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.050+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 1 ==== osd_op_reply(5 restore.1 [call] v30'23 uv11 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 
2026-03-20T11:46:51.047 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.050+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:6 6.14 6:293d40bf:restore::restore.1:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c004460 con 0x558876691650 2026-03-20T11:46:51.048 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.051+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 2 ==== osd_op_reply(6 restore.1 [call out=166b] v0'0 uv11 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.048 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.051+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:7 6.13 6:cc734541:restore::restore.2:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876692350 con 0x558876691650 2026-03-20T11:46:51.048 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.051+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 3 ==== osd_op_reply(7 restore.2 [call] v30'16 uv10 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.048 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.051+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:8 6.13 6:cc734541:restore::restore.2:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876691650 2026-03-20T11:46:51.049 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.052+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 4 ==== osd_op_reply(8 restore.2 [call out=166b] v0'0 uv10 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.049 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.052+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x558876690b40 0x558876693360 unknown :-1 s=NONE pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0).connect 2026-03-20T11:46:51.049 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.052+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:9 6.0 6:03a53c4b:restore::restore.3:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5588766938e0 con 0x558876690b40 2026-03-20T11:46:51.049 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.052+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x558876690b40 0x558876693360 unknown :-1 s=BANNER_CONNECTING pgs=0 cs=0 l=1 rev1=0 crypto rx=0 tx=0 comp rx=0 tx=0)._handle_peer_banner_payload supported=3 required=0 2026-03-20T11:46:51.049 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.052+0000 7f5cbaffd640 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] 
conn(0x558876690b40 0x558876693360 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).ready entity=osd.0 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0 2026-03-20T11:46:51.050 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.053+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 1 ==== osd_op_reply(9 restore.3 [call] v30'31 uv17 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.050 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.053+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:10 6.0 6:03a53c4b:restore::restore.3:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800bb70 con 0x558876690b40 2026-03-20T11:46:51.050 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.053+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 2 ==== osd_op_reply(10 restore.3 [call out=166b] v0'0 uv17 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.050 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.053+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:11 6.2 6:4485ab68:restore::restore.4:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668f8b0 con 0x7f5c500071b0 2026-03-20T11:46:51.051 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.054+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 5 ==== osd_op_reply(11 restore.4 [call] v30'27 uv18 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.051 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.054+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:12 6.2 6:4485ab68:restore::restore.4:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x7f5c500071b0 2026-03-20T11:46:51.051 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.055+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 6 ==== osd_op_reply(12 restore.4 [call out=166b] v0'0 uv18 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.052 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.055+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:13 6.0 6:04e06ead:restore::restore.5:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668f8b0 con 0x558876690b40 2026-03-20T11:46:51.052 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.055+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 3 ==== osd_op_reply(13 restore.5 [call] v30'32 uv19 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.052 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.055+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> 
[v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:14 6.0 6:04e06ead:restore::restore.5:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c004460 con 0x558876690b40 2026-03-20T11:46:51.053 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.056+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 4 ==== osd_op_reply(14 restore.5 [call out=166b] v0'0 uv19 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.053 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.056+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:15 6.19 6:99dcebbc:restore::restore.6:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668f000 con 0x558876690b40 2026-03-20T11:46:51.053 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.056+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 5 ==== osd_op_reply(15 restore.6 [call] v30'24 uv9 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.053 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.057+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:16 6.19 6:99dcebbc:restore::restore.6:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876690b40 2026-03-20T11:46:51.054 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.057+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 6 ==== osd_op_reply(16 restore.6 [call out=166b] v0'0 uv9 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.054 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.057+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:17 6.1e 6:7f8df977:restore::restore.7:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668ed50 con 0x7f5c500071b0 2026-03-20T11:46:51.054 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.058+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 7 ==== osd_op_reply(17 restore.7 [call] v30'15 uv5 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.054 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.058+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:18 6.1e 6:7f8df977:restore::restore.7:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d120 con 0x7f5c500071b0 2026-03-20T11:46:51.055 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.058+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 8 ==== osd_op_reply(18 restore.7 [call out=166b] v0'0 uv5 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.055 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.058+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:19 6.e 6:7569ea81:restore::restore.8:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887667fe20 con 0x7f5c500071b0 2026-03-20T11:46:51.055 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.059+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 9 ==== osd_op_reply(19 restore.8 [call] v30'25 uv14 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.056 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.059+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:20 6.e 6:7569ea81:restore::restore.8:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x7f5c500071b0 2026-03-20T11:46:51.056 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.059+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 10 ==== osd_op_reply(20 restore.8 [call out=166b] v0'0 uv14 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.056 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.059+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:21 6.7 6:e779991c:restore::restore.9:head [call fifo.create_meta in=61b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668f000 con 0x558876690b40 2026-03-20T11:46:51.056 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.059+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 7 ==== osd_op_reply(21 restore.9 [call] v30'22 uv15 ondisk = 0) ==== 153+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.056 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.060+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:22 6.7 6:e779991c:restore::restore.9:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c004440 con 0x558876690b40 2026-03-20T11:46:51.057 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.060+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 8 ==== osd_op_reply(22 restore.9 [call out=166b] v0'0 uv15 ondisk = 0) ==== 153+0+166 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.057 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.060+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:23 6.12 6:4c8eca8b:restore::restore.10:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x55887668f000 con 0x558876690b40 2026-03-20T11:46:51.057 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.060+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 9 ==== osd_op_reply(23 restore.10 [call] v30'14 uv6 ondisk = 0) 
==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.057 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.060+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:24 6.12 6:4c8eca8b:restore::restore.10:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876690b40 2026-03-20T11:46:51.058 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.061+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 10 ==== osd_op_reply(24 restore.10 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.058 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.061+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:25 6.0 6:01ff4341:restore::restore.11:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876695430 con 0x558876690b40 2026-03-20T11:46:51.059 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.062+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 11 ==== osd_op_reply(25 restore.11 [call] v30'33 uv21 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.059 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.062+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:26 6.0 6:01ff4341:restore::restore.11:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d120 con 0x558876690b40 2026-03-20T11:46:51.059 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.062+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 12 ==== osd_op_reply(26 restore.11 [call out=168b] v0'0 uv21 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.059 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.062+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:27 6.11 6:89a402d8:restore::restore.12:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876695430 con 0x558876691650 2026-03-20T11:46:51.060 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.063+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 5 ==== osd_op_reply(27 restore.12 [call] v30'20 uv11 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.060 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.063+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:28 6.11 6:89a402d8:restore::restore.12:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x558876691650 2026-03-20T11:46:51.060 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.063+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 
v2:192.168.123.100:6816/2144187382 6 ==== osd_op_reply(28 restore.12 [call out=168b] v0'0 uv11 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.060 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.063+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:29 6.5 6:a6ec72c6:restore::restore.13:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876695430 con 0x558876690b40 2026-03-20T11:46:51.061 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.064+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 13 ==== osd_op_reply(29 restore.13 [call] v30'20 uv13 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.062 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.065+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:30 6.5 6:a6ec72c6:restore::restore.13:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c004440 con 0x558876690b40 2026-03-20T11:46:51.062 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.065+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 14 ==== osd_op_reply(30 restore.13 [call out=168b] v0'0 uv13 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.062 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.065+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:31 6.f 6:f5d18734:restore::restore.14:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876694e40 con 0x558876691650 2026-03-20T11:46:51.063 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.066+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 7 ==== osd_op_reply(31 restore.14 [call] v30'20 uv10 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.063 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.066+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:32 6.f 6:f5d18734:restore::restore.14:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876691650 2026-03-20T11:46:51.063 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.066+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 8 ==== osd_op_reply(32 restore.14 [call out=168b] v0'0 uv10 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.063 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.066+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:33 6.2 6:476e3e28:restore::restore.15:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876694e40 con 0x7f5c500071b0 2026-03-20T11:46:51.064 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.067+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 11 ==== osd_op_reply(33 restore.15 [call] v30'28 uv20 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.064 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.067+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:34 6.2 6:476e3e28:restore::restore.15:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d5d0 con 0x7f5c500071b0 2026-03-20T11:46:51.064 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.067+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 12 ==== osd_op_reply(34 restore.15 [call out=168b] v0'0 uv20 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.064 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.067+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:35 6.1c 6:3fd0a735:restore::restore.16:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876682540 con 0x7f5c500071b0 2026-03-20T11:46:51.065 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.068+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 13 ==== osd_op_reply(35 restore.16 [call] v30'24 uv17 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.065 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.068+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:36 6.1c 6:3fd0a735:restore::restore.16:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x7f5c500071b0 2026-03-20T11:46:51.065 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.068+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 14 ==== osd_op_reply(36 restore.16 [call out=168b] v0'0 uv17 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.065 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.068+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:37 6.18 6:1eac3643:restore::restore.17:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876682970 con 0x558876690b40 2026-03-20T11:46:51.066 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.069+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 15 ==== osd_op_reply(37 restore.17 [call] v30'18 uv15 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.066 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.069+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:38 6.18 6:1eac3643:restore::restore.17:head [call fifo.get_meta in=19b] snapc 0=[] 
ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c004440 con 0x558876690b40 2026-03-20T11:46:51.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.070+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 16 ==== osd_op_reply(38 restore.17 [call out=168b] v0'0 uv15 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.067 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.070+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:39 6.1 6:804fdd09:restore::restore.18:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876683270 con 0x7f5c500071b0 2026-03-20T11:46:51.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.071+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 15 ==== osd_op_reply(39 restore.18 [call] v30'23 uv12 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.071+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:40 6.1 6:804fdd09:restore::restore.18:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x7f5c500071b0 2026-03-20T11:46:51.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.071+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 16 ==== osd_op_reply(40 restore.18 [call out=168b] v0'0 uv12 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.068 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.071+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:41 6.e 6:72cf9f9c:restore::restore.19:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876683b70 con 0x7f5c500071b0 2026-03-20T11:46:51.069 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.072+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 17 ==== osd_op_reply(41 restore.19 [call] v30'26 uv16 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.069 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.072+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:42 6.e 6:72cf9f9c:restore::restore.19:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d5d0 con 0x7f5c500071b0 2026-03-20T11:46:51.069 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 18 ==== osd_op_reply(42 restore.19 [call out=168b] v0'0 uv16 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.069 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- 
osd_op(unknown.0.0:43 6.7 6:e2f222a4:restore::restore.20:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876684470 con 0x558876690b40 2026-03-20T11:46:51.070 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 17 ==== osd_op_reply(43 restore.20 [call] v30'23 uv17 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.070 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:44 6.7 6:e2f222a4:restore::restore.20:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x558876690b40 2026-03-20T11:46:51.070 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 18 ==== osd_op_reply(44 restore.20 [call out=168b] v0'0 uv17 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.070 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.073+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:45 6.19 6:9f54a4c7:restore::restore.21:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876682540 con 0x558876690b40 2026-03-20T11:46:51.071 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.074+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 19 ==== osd_op_reply(45 restore.21 [call] v30'25 uv15 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.071 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.074+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:46 6.19 6:9f54a4c7:restore::restore.21:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c0043a0 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.075+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 20 ==== osd_op_reply(46 restore.21 [call out=168b] v0'0 uv15 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.075+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:47 6.18 6:1eddfd8c:restore::restore.22:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5588766849e0 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.075+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 21 ==== osd_op_reply(47 restore.22 [call] v30'19 uv9 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.075+0000 7f5cb97fa640 1 -- 
192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:48 6.18 6:1eddfd8c:restore::restore.22:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.075+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 22 ==== osd_op_reply(48 restore.22 [call out=168b] v0'0 uv9 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.072 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.076+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:49 6.11 6:88da716a:restore::restore.23:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5588766852e0 con 0x558876691650 2026-03-20T11:46:51.073 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.076+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 9 ==== osd_op_reply(49 restore.23 [call] v30'21 uv5 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.073 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.076+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:50 6.11 6:88da716a:restore::restore.23:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d5d0 con 0x558876691650 2026-03-20T11:46:51.073 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.076+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 10 ==== osd_op_reply(50 restore.23 [call out=168b] v0'0 uv5 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.073 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.077+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:51 6.1b 6:dd126c37:restore::restore.24:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876685bc0 con 0x558876690b40 2026-03-20T11:46:51.075 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.078+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 23 ==== osd_op_reply(51 restore.24 [call] v30'10 uv6 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 2026-03-20T11:46:51.075 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.078+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:52 6.1b 6:dd126c37:restore::restore.24:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x558876690b40 2026-03-20T11:46:51.075 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.078+0000 7f5cbaffd640 1 -- 192.168.123.100:0/2133862501 <== osd.0 v2:192.168.123.100:6808/1162726296 24 ==== osd_op_reply(52 restore.24 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cac03b070 con 0x558876690b40 
2026-03-20T11:46:51.075 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.078+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:53 6.1c 6:3a351582:restore::restore.25:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5588766864c0 con 0x7f5c500071b0 2026-03-20T11:46:51.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.079+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 19 ==== osd_op_reply(53 restore.25 [call] v30'25 uv13 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.079+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:54 6.1c 6:3a351582:restore::restore.25:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c0043a0 con 0x7f5c500071b0 2026-03-20T11:46:51.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.079+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 20 ==== osd_op_reply(54 restore.25 [call out=168b] v0'0 uv13 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.076 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.079+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:55 6.17 6:e90c9fba:restore::restore.26:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876682540 con 0x7f5c500071b0 2026-03-20T11:46:51.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.080+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 21 ==== osd_op_reply(55 restore.26 [call] v30'11 uv5 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.080+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:56 6.17 6:e90c9fba:restore::restore.26:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x7f5c500071b0 2026-03-20T11:46:51.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.080+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 22 ==== osd_op_reply(56 restore.26 [call out=168b] v0'0 uv5 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.077 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.080+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:57 6.14 6:2c1122a8:restore::restore.27:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876686a30 con 0x558876691650 2026-03-20T11:46:51.078 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.081+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 11 ==== osd_op_reply(57 
restore.27 [call] v30'24 uv7 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.078 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.081+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:58 6.14 6:2c1122a8:restore::restore.27:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d5d0 con 0x558876691650 2026-03-20T11:46:51.078 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.081+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 12 ==== osd_op_reply(58 restore.27 [call out=168b] v0'0 uv7 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.078 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.082+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:59 6.1 6:84bbc547:restore::restore.28:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876687360 con 0x7f5c500071b0 2026-03-20T11:46:51.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.082+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 23 ==== osd_op_reply(59 restore.28 [call] v30'24 uv6 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.082+0000 7f5cb2ffd640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:60 6.1 6:84bbc547:restore::restore.28:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c7c007f00 con 0x7f5c500071b0 2026-03-20T11:46:51.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.082+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 24 ==== osd_op_reply(60 restore.28 [call out=168b] v0'0 uv6 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.079 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.083+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:61 6.2 6:44311ebf:restore::restore.29:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876682540 con 0x7f5c500071b0 2026-03-20T11:46:51.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.083+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 25 ==== osd_op_reply(61 restore.29 [call] v30'29 uv14 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.083+0000 7f5cb9ffb640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:62 6.2 6:44311ebf:restore::restore.29:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c8c0043a0 con 0x7f5c500071b0 2026-03-20T11:46:51.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.083+0000 7f5cba7fc640 1 -- 
192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 26 ==== osd_op_reply(62 restore.29 [call out=168b] v0'0 uv14 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.080 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.084+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:63 6.14 6:2df96c99:restore::restore.30:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x5588766878d0 con 0x558876691650 2026-03-20T11:46:51.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.084+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 13 ==== osd_op_reply(63 restore.30 [call] v30'25 uv9 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.084+0000 7f5cb97fa640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:64 6.14 6:2df96c99:restore::restore.30:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c50028f80 con 0x558876691650 2026-03-20T11:46:51.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.085+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/2133862501 <== osd.2 v2:192.168.123.100:6816/2144187382 14 ==== osd_op_reply(64 restore.30 [call out=168b] v0'0 uv9 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5cb400c420 con 0x558876691650 2026-03-20T11:46:51.081 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.085+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:65 6.2 6:4739c10f:restore::restore.31:head [call fifo.create_meta in=62b] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876686af0 con 0x7f5c500071b0 2026-03-20T11:46:51.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.085+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 27 ==== osd_op_reply(65 restore.31 [call] v30'30 uv16 ondisk = 0) ==== 154+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.085+0000 7f5cb3fff640 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:66 6.2 6:4739c10f:restore::restore.31:head [call fifo.get_meta in=19b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e30) -- 0x7f5c5800d5d0 con 0x7f5c500071b0 2026-03-20T11:46:51.082 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 28 ==== osd_op_reply(66 restore.31 [call out=168b] v0'0 uv16 ondisk = 0) ==== 154+0+168 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 init_complete bucket index max shards: 11 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 Filter name: none 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5c1e7fc640 20 reqs_thread_entry: start 
2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 cache get: name=default.rgw.meta+users.uid+test : miss 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:60 8.16 8:6a87b59a:users.uid::test:head [call version.read in=11b,read 0~0,getxattrs] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876682160 con 0x558876597550 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 28 ==== osd_op_reply(60 test [call,read 0~0,getxattrs] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 232+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 cache put: name=default.rgw.meta+users.uid+test info.flags=0x0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 adding default.rgw.meta+users.uid+test to cache LRU end 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 cache get: name=default.rgw.meta+users.swift+test:tester : miss 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 rados->read ofs=0 len=0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:61 8.5 8:a1843d29:users.swift::test%3atester:head [stat,read 0~0] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876682160 con 0x558876597550 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 29 ==== osd_op_reply(61 test:tester [stat,read 0~0] v0'0 uv0 ondisk = -2 ((2) No such file or directory)) ==== 197+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 20 rados_obj.operate() r=-2 bl.length=0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 cache put: name=default.rgw.meta+users.swift+test:tester info.flags=0x0 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 adding default.rgw.meta+users.swift+test:tester to cache LRU end 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 10 cache get: name=default.rgw.meta+users.swift+test:tester : hit (negative entry) 2026-03-20T11:46:51.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.086+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> 
[v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:62 8.16 8:6a87b59a:users.uid::test:head [delete,create,call version.set in=58b,writefull 0~306 in=306b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887668a220 con 0x558876597550 2026-03-20T11:46:51.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.087+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 30 ==== osd_op_reply(62 test [delete,create,call,writefull 0~306] v30'1 uv1 ondisk = 0) ==== 274+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.087+0000 7f5cc3b1f900 10 cache put: name=default.rgw.meta+users.uid+test info.flags=0x17 2026-03-20T11:46:51.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.087+0000 7f5cc3b1f900 10 moving default.rgw.meta+users.uid+test to cache LRU end 2026-03-20T11:46:51.084 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.087+0000 7f5cc3b1f900 10 distributing notification oid=default.rgw.control:notify.3 cni=[op: 0, obj: default.rgw.meta:users.uid:test, ofs0, ns] 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:63 7.e 7:7759931f:::notify.3:head [notify cookie 94044590480752 in=495b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876635e60 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 18 ==== watch-notify(notify (1) cookie 94044590856672 notify 128849018881 ret 0) ==== 525+0+0 (crc 0 0 0) 0x5588765394a0 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 19 ==== osd_op_reply(63 notify.3 [notify cookie 94044590480752 out=8b] v0'0 uv6 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5c99ffb640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018881 cookie 94044590856672 notifier 4228 bl.length()=483 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5c99ffb640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+test info.flags=0x17 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5c99ffb640 10 rgw watcher librados: moving default.rgw.meta+users.uid+test to cache LRU end 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5c99ffb640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:64 7.e 7:7759931f:::notify.3:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7f5c54003ac0 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 20 ==== watch-notify(notify_complete (2) 
cookie 94044590480752 notify 128849018881 ret 0) ==== 42+0+48 (crc 0 0 0) 0x558876538bd0 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 21 ==== osd_op_reply(64 notify.3 [notify-ack] v0'0 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb400c420 con 0x5588765bbc70 2026-03-20T11:46:51.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.088+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:65 8.5 8:a1843d29:users.swift::test%3atester:head [delete,create,writefull 0~8 in=8b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876635e00 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 31 ==== osd_op_reply(65 test:tester [delete,create,writefull 0~8] v30'1 uv1 ondisk = 0) ==== 239+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cc3b1f900 10 cache put: name=default.rgw.meta+users.swift+test:tester info.flags=0x7 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cc3b1f900 10 moving default.rgw.meta+users.swift+test:tester to cache LRU end 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cc3b1f900 10 distributing notification oid=default.rgw.control:notify.6 cni=[op: 0, obj: default.rgw.meta:users.swift:test:tester, ofs0, ns] 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:66 7.14 7:2b04a3e9:::notify.6:head [notify cookie 94044590136832 in=182b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876689d00 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 32 ==== watch-notify(notify (1) cookie 94044590855248 notify 128849018880 ret 0) ==== 212+0+0 (crc 0 0 0) 0x558876681dd0 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5c9a7fc640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018880 cookie 94044590855248 notifier 4228 bl.length()=170 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 33 ==== osd_op_reply(66 notify.6 [notify cookie 94044590136832 out=8b] v0'0 uv6 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5c9a7fc640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.swift+test:tester info.flags=0x7 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5c9a7fc640 10 rgw watcher librados: moving default.rgw.meta+users.swift+test:tester to cache LRU end 2026-03-20T11:46:51.086 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.089+0000 7f5c9a7fc640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:67 7.14 7:2b04a3e9:::notify.6:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7f5c7c004620 con 0x558876597550 2026-03-20T11:46:51.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.090+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 34 ==== osd_op_reply(67 notify.6 [notify-ack] v0'0 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.087 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.090+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 35 ==== watch-notify(notify_complete (2) cookie 94044590136832 notify 128849018880 ret 0) ==== 42+0+48 (crc 0 0 0) 0x7f5ca40a0040 con 0x558876597550 2026-03-20T11:46:51.087 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.090+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:68 8.16 8:6a87b59a:users.uid::test:head [delete,create,call version.set in=58b,writefull 0~336 in=336b] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876689d00 con 0x558876597550 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 36 ==== osd_op_reply(68 test [delete,create,call,writefull 0~336] v30'2 uv2 ondisk = 0) ==== 274+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cc3b1f900 10 cache put: name=default.rgw.meta+users.uid+test info.flags=0x17 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cc3b1f900 10 moving default.rgw.meta+users.uid+test to cache LRU end 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cc3b1f900 10 distributing notification oid=default.rgw.control:notify.3 cni=[op: 0, obj: default.rgw.meta:users.uid:test, ofs0, ns] 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:69 7.e 7:7759931f:::notify.3:head [notify cookie 94044590481984 in=525b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876689d00 con 0x5588765bbc70 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 22 ==== watch-notify(notify (1) cookie 94044590856672 notify 128849018882 ret 0) ==== 555+0+0 (crc 0 0 0) 0x558876538bd0 con 0x5588765bbc70 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 23 ==== osd_op_reply(69 notify.3 [notify cookie 94044590481984 out=8b] v0'0 uv6 ondisk = 0) ==== 152+0+8 (crc 0 0 0) 0x7f5cb4002590 con 0x5588765bbc70 2026-03-20T11:46:51.088 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5c99ffb640 10 rgw watcher librados: RGWWatcher::handle_notify() notify_id 128849018882 cookie 94044590856672 notifier 4228 bl.length()=513 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5c99ffb640 10 rgw watcher librados: cache put: name=default.rgw.meta+users.uid+test info.flags=0x17 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5c99ffb640 10 rgw watcher librados: moving default.rgw.meta+users.uid+test to cache LRU end 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5c99ffb640 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:70 7.e 7:7759931f:::notify.3:head [notify-ack in=20b] snapc 0=[] ondisk+read+known_if_redirected+full_try+supports_pool_eio e30) -- 0x7f5c54003ac0 con 0x5588765bbc70 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 24 ==== watch-notify(notify_complete (2) cookie 94044590481984 notify 128849018882 ret 0) ==== 42+0+48 (crc 0 0 0) 0x558876533af0 con 0x5588765bbc70 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 25 ==== osd_op_reply(70 notify.3 [notify-ack] v0'0 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb4002590 con 0x5588765bbc70 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout:{ 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "user_id": "test", 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "display_name": "Tester-Subuser", 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "email": "", 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "suspended": 0, 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "max_buckets": 1000, 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "subusers": [ 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: { 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "id": "test:tester", 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: "permissions": "full-control" 2026-03-20T11:46:51.088 INFO:tasks.workunit.client.0.vm00.stdout: } 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: ], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "keys": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "swift_keys": [ 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: { 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "user": "test:tester", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "secret_key": "testing", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "active": true, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "create_date": "0.000000" 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: } 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: ], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "caps": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "op_mask": "read, write, delete", 2026-03-20T11:46:51.089 
INFO:tasks.workunit.client.0.vm00.stdout: "default_placement": "", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "default_storage_class": "", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "placement_tags": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "bucket_quota": { 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "enabled": false, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "check_on_raw": false, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_size": -1, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_size_kb": 0, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_objects": -1 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: }, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "user_quota": { 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "enabled": false, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "check_on_raw": false, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_size": -1, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_size_kb": 0, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "max_objects": -1 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: }, 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "temp_url_keys": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "type": "rgw", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "mfa_ids": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "account_id": "", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "path": "/", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "create_date": "2026-03-20T11:46:51.086886Z", 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "tags": [], 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: "group_ids": [] 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout:} 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stdout: 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.091+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:67 6.4 6:22d26bf9:::data_loggenerations_metadata:head [watch unwatch cookie 140034657227824] snapc 0=[] ondisk+write+known_if_redirected+supports_pool_eio e30) -- 0x558876688cf0 con 0x7f5c500071b0 2026-03-20T11:46:51.089 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.092+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 <== osd.1 v2:192.168.123.100:6800/3952598619 29 ==== osd_op_reply(67 data_loggenerations_metadata [watch unwatch cookie 140034657227824] v30'27 uv1 ondisk = 0) ==== 172+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x7f5c500071b0 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:71 7.1f 7:f95f44c2:::notify.0:head [watch unwatch cookie 94044590842768] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876688cf0 con 0x7f5ca4068070 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 
192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:72 7.0 7:05bf5b68:::notify.1:head [watch unwatch cookie 94044590851136] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876682d50 con 0x558876597550 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:73 7.15 7:a93a5511:::notify.2:head [watch unwatch cookie 94044590834592] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x55887668fc00 con 0x5588765bbc70 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] -- osd_op(unknown.0.0:74 7.e 7:7759931f:::notify.3:head [watch unwatch cookie 94044590856672] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876695810 con 0x5588765bbc70 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:75 7.d 7:b4812045:::notify.4:head [watch unwatch cookie 94044590859808] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588763a2f20 con 0x558876597550 2026-03-20T11:46:51.090 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:76 7.3 7:c609908c:::notify.5:head [watch unwatch cookie 94044590828480] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588762fd850 con 0x7f5ca4068070 2026-03-20T11:46:51.091 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.093+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] -- osd_op(unknown.0.0:77 7.14 7:2b04a3e9:::notify.6:head [watch unwatch cookie 94044590855248] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x5588766828e0 con 0x558876597550 2026-03-20T11:46:51.091 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.094+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 --> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] -- osd_op(unknown.0.0:78 7.9 7:93e5b521:::notify.7:head [watch unwatch cookie 94044590848880] snapc 0=[] ondisk+write+known_if_redirected+full_try+supports_pool_eio e30) -- 0x558876688cf0 con 0x7f5ca4068070 2026-03-20T11:46:51.092 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.095+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 16 ==== osd_op_reply(76 notify.5 [watch unwatch cookie 94044590828480] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.092 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.095+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 37 ==== osd_op_reply(75 notify.4 [watch unwatch cookie 94044590859808] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 
2026-03-20T11:46:51.092 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.095+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 38 ==== osd_op_reply(72 notify.1 [watch unwatch cookie 94044590851136] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 17 ==== osd_op_reply(71 notify.0 [watch unwatch cookie 94044590842768] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=5 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 2 removed watcher, disabling cache 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=4 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=1 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 26 ==== osd_op_reply(73 notify.2 [watch unwatch cookie 94044590834592] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb4002590 con 0x5588765bbc70 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cbb7fe640 1 -- 192.168.123.100:0/4275789852 <== osd.2 v2:192.168.123.100:6816/2144187382 27 ==== osd_op_reply(74 notify.3 [watch unwatch cookie 94044590856672] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cb4002590 con 0x5588765bbc70 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=0 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=2 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cbaffd640 1 -- 192.168.123.100:0/4275789852 <== osd.0 v2:192.168.123.100:6808/1162726296 18 ==== osd_op_reply(78 notify.7 [watch unwatch cookie 94044590848880] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5cac03b070 con 0x7f5ca4068070 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=3 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 <== osd.1 v2:192.168.123.100:6800/3952598619 39 ==== osd_op_reply(77 notify.6 [watch unwatch cookie 94044590855248] v30'8 uv6 ondisk = 0) ==== 152+0+0 (crc 0 0 0) 0x7f5ca40ab040 con 0x558876597550 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=7 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 20 remove_watcher() i=6 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f5ca4068070 msgr2=0x7f5ca4072460 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.093 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x7f5ca4068070 0x7f5ca4072460 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x558876597550 msgr2=0x5588765b7a00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x558876597550 0x5588765b7a00 crc :-1 s=READY pgs=16 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5588765bbc70 msgr2=0x5588765dc050 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x5588765bbc70 0x5588765dc050 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c8003c4a0 msgr2=0x7f5c8005c950 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c8003c4a0 0x7f5c8005c950 secure :-1 s=READY pgs=47 cs=0 l=1 rev1=1 crypto rx=0x7f5cac01e560 tx=0x7f5cac04b000 comp rx=0 tx=0).stop 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 msgr2=0x558876536d20 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/4275789852 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x558876540030 0x558876536d20 secure :-1 s=READY pgs=142 cs=0 l=1 rev1=1 crypto rx=0x7f5cb4052ec0 tx=0x7f5cb4054260 comp rx=0 tx=0).stop 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cba7fc640 1 -- 192.168.123.100:0/4275789852 reap_dead start 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 shutdown_connections 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.096+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 >> 192.168.123.100:0/4275789852 conn(0x558876527460 msgr2=0x5588765348a0 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:51.093 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 shutdown_connections 2026-03-20T11:46:51.093 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/4275789852 wait complete. 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x558876690b40 msgr2=0x558876693360 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6808/1162726296,v1:192.168.123.100:6809/1162726296] conn(0x558876690b40 0x558876693360 crc :-1 s=READY pgs=14 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7f5c500071b0 msgr2=0x7f5c50027660 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x7f5c500071b0 0x7f5c50027660 crc :-1 s=READY pgs=17 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x558876691650 msgr2=0x558876691a20 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x558876691650 0x558876691a20 crc :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9003c3b0 msgr2=0x7f5c9005c860 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9003c3b0 0x7f5c9005c860 secure :-1 s=READY pgs=46 cs=0 l=1 rev1=1 crypto rx=0x55887638dd10 tx=0x7f5cac028f50 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 msgr2=0x55887639a7f0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/2133862501 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x55887639a420 0x55887639a7f0 secure :-1 s=READY pgs=140 cs=0 l=1 rev1=1 crypto rx=0x558876375510 tx=0x7f5cb400b780 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cba7fc640 1 -- 192.168.123.100:0/2133862501 reap_dead start 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 shutdown_connections 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 >> 192.168.123.100:0/2133862501 conn(0x5588763a1370 msgr2=0x5588763a1740 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 shutdown_connections 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/2133862501 wait complete. 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 msgr2=0x55887638c7c0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 0x55887638c7c0 crc :-1 s=READY pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 msgr2=0x55887638fdc0 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.097+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 0x55887638fdc0 crc :-1 s=READY pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 msgr2=0x7f5c9c05c900 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 0x7f5c9c05c900 secure :-1 s=READY pgs=45 cs=0 l=1 rev1=1 crypto rx=0x7f5ca4002f70 tx=0x7f5ca4064000 comp rx=0 tx=0).stop 2026-03-20T11:46:51.094 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 msgr2=0x558876387dd0 secure :-1 s=STATE_CONNECTION_ESTABLISHED l=1).mark_down 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 secure :-1 s=READY pgs=139 cs=0 l=1 rev1=1 crypto rx=0x7f5cac001250 tx=0x7f5cac001280 comp rx=0 tx=0).stop 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 
7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 shutdown_connections 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6816/2144187382,v1:192.168.123.100:6817/2144187382] conn(0x55887638f960 0x55887638fdc0 unknown :-1 s=CLOSED pgs=13 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6800/3952598619,v1:192.168.123.100:6801/3952598619] conn(0x5588763c9dc0 0x55887638c7c0 unknown :-1 s=CLOSED pgs=15 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:6824/1022285047,v1:192.168.123.100:6825/1022285047] conn(0x7f5c9c03c450 0x7f5c9c05c900 unknown :-1 s=CLOSED pgs=45 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 --2- 192.168.123.100:0/3862315907 >> [v2:192.168.123.100:3300/0,v1:192.168.123.100:6789/0] conn(0x5588763a2940 0x558876387dd0 unknown :-1 s=CLOSED pgs=139 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).stop 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 >> 192.168.123.100:0/3862315907 conn(0x5588762fbc20 msgr2=0x5588763c9420 unknown :-1 s=STATE_NONE l=0).mark_down 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 shutdown_connections 2026-03-20T11:46:51.095 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-20T11:46:51.098+0000 7f5cc3b1f900 1 -- 192.168.123.100:0/3862315907 wait complete. 
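Annotation: the stderr records above trace RGW's metadata-cache invalidation path. Each write to a users.uid / users.swift object is followed by "distributing notification" on a default.rgw.control:notify.N object; every registered watcher's RGWWatcher::handle_notify() then re-inserts the entry ("cache put") and moves it to the LRU tail ("moving ... to cache LRU end"); at shutdown the client issues "watch unwatch" on all eight notify objects and marks its OSD/mon connections down. A minimal Python sketch of that "put + move to LRU end" bookkeeping, using only the standard library (class and field names are illustrative, not the RGW implementation):

    from collections import OrderedDict

    class MetadataCache:
        """Toy LRU cache mirroring the 'cache put' / 'moving ... to cache LRU end' records."""

        def __init__(self, max_entries=10000):
            self.max_entries = max_entries
            self.entries = OrderedDict()

        def put(self, name, info):
            # "cache put: name=<pool+oid> info.flags=..."
            self.entries[name] = info
            # "moving <name> to cache LRU end"
            self.entries.move_to_end(name)
            if len(self.entries) > self.max_entries:
                self.entries.popitem(last=False)  # evict the least recently used entry

    cache = MetadataCache()
    cache.put("default.rgw.meta+users.uid+test", {"flags": 0x17})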
2026-03-20T11:46:51.116 INFO:tasks.workunit.client.0.vm00.stderr:7+0 records in 2026-03-20T11:46:51.116 INFO:tasks.workunit.client.0.vm00.stderr:7+0 records out 2026-03-20T11:46:51.116 INFO:tasks.workunit.client.0.vm00.stderr:7340032 bytes (7.3 MB, 7.0 MiB) copied, 0.0167089 s, 439 MB/s 2026-03-20T11:46:51.235 INFO:tasks.workunit.client.0.vm00.stderr:51+0 records in 2026-03-20T11:46:51.235 INFO:tasks.workunit.client.0.vm00.stderr:51+0 records out 2026-03-20T11:46:51.236 INFO:tasks.workunit.client.0.vm00.stderr:53477376 bytes (53 MB, 51 MiB) copied, 0.118965 s, 450 MB/s 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: s3cmd version 2.4.0 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: Reading file '/tmp/s3config.60947' 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_base->vm00.local 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: access_key->05...17_chars...4 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: secret_key->h7...53_chars...= 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: bucket_location->us-east-1 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_certificate->True 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_hostname->True 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: default_mime_type->binary/octet-stream 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: delete_removed->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: dry_run->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: enable_multipart->True 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encoding->UTF-8 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encrypt->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: follow_symlinks->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: force->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: guess_mime_type->True 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_bucket->anything.with.three.dots 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_chunk_size_mb->15 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_max_chunks->10000 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recursive->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recv_chunk->65536 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: send_chunk->65536 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: signature_v2->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: socket_timeout->300 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_https->False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_mime_magic->True 2026-03-20T11:46:51.299 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: verbosity->WARNING 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config cache_file -> 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config follow_symlinks -> False 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config verbosity -> 10 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Command: mb 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/ 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:51.299 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = host:vm00.local 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114651Z 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/ 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114651Z 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'x-amz-date': '20260320T114651Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=7cc1077d6c770d2ca8ada417f0594389b7a676cb2ee8cb6acb09a61a9b1defad', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'} 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 
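Annotation: the s3cmd debug output above shows the full AWS Signature Version 4 derivation for the bucket-creation PUT: an empty-payload SHA-256 (e3b0c442...), canonical headers host / x-amz-content-sha256 / x-amz-date, and the resulting Authorization header with Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request. A minimal sketch of the same derivation with hashlib/hmac; the secret key here is a placeholder, so the printed signature will not match the one in the log:

    import hashlib, hmac

    def _hmac(key, msg):
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    def sigv4_authorization(secret_key, access_key, amz_date, region, payload_hash,
                            method="PUT", uri="/multipart-bkt/", host="vm00.local"):
        date = amz_date[:8]
        canonical_headers = (f"host:{host}\n"
                             f"x-amz-content-sha256:{payload_hash}\n"
                             f"x-amz-date:{amz_date}\n")
        signed_headers = "host;x-amz-content-sha256;x-amz-date"
        # METHOD, URI, (empty) query string, headers, signed-header list, payload hash
        canonical_request = "\n".join([method, uri, "", canonical_headers,
                                       signed_headers, payload_hash])
        scope = f"{date}/{region}/s3/aws4_request"
        string_to_sign = "\n".join(["AWS4-HMAC-SHA256", amz_date, scope,
                                    hashlib.sha256(canonical_request.encode()).hexdigest()])
        # signing key: HMAC chain over date, region, service, "aws4_request"
        key = _hmac(("AWS4" + secret_key).encode(), date)
        for step in (region, "s3", "aws4_request"):
            key = _hmac(key, step)
        signature = hmac.new(key, string_to_sign.encode(), hashlib.sha256).hexdigest()
        return (f"AWS4-HMAC-SHA256 Credential={access_key}/{scope},"
                f"SignedHeaders={signed_headers},Signature={signature}")

    # SHA-256 of an empty body, as logged by s3cmd
    EMPTY_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    print(sigv4_authorization("not-the-real-secret", "0555b35654ad1656d804",
                              "20260320T114651Z", "us-east-1", EMPTY_SHA256))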
2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): creating new connection: http://vm00.local 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: non-proxied HTTPConnection(vm00.local, None) 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/ 2026-03-20T11:46:51.300 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='PUT', uri='/multipart-bkt/', headers={'x-amz-date': '20260320T114651Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=7cc1077d6c770d2ca8ada417f0594389b7a676cb2ee8cb6acb09a61a9b1defad', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 bytes) 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#1) 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:51 GMT', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx000003d2c3263cc907ef3-0069bd33ab-4214-default'}, 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:51.309 INFO:tasks.workunit.client.0.vm00.stdout:Bucket 's3://multipart-bkt/' created 2026-03-20T11:46:51.383 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: s3cmd version 2.4.0 2026-03-20T11:46:51.383 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: Reading file '/tmp/s3config.60947' 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_base->vm00.local 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: access_key->05...17_chars...4 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: secret_key->h7...53_chars...= 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: bucket_location->us-east-1 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_certificate->True 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_hostname->True 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: default_mime_type->binary/octet-stream 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: delete_removed->False 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: dry_run->False 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: enable_multipart->True 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encoding->UTF-8 2026-03-20T11:46:51.384 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encrypt->False 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: follow_symlinks->False 2026-03-20T11:46:51.384 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: force->False 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: guess_mime_type->True 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_bucket->anything.with.three.dots 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_chunk_size_mb->15 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_max_chunks->10000 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recursive->False 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recv_chunk->65536 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: send_chunk->65536 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: signature_v2->False 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: socket_timeout->300 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_https->False 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_mime_magic->True 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: verbosity->WARNING 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config cache_file -> 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config follow_symlinks -> False 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config verbosity -> 10 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Command: put 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Cache file not found or empty, creating/populating it. 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Compiling list of local files... 
2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Unicodising b'huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Unicodising b'/tmp' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Unicodising b'huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Applying --exclude/--include 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CHECK: 'huge_obj.temp.60947' 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: PASS: 'huge_obj.temp.60947' 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Running stat() and reading/calculating MD5 values on 1 files, this may take some time... 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: doing file I/O to read md5 of huge_obj.temp.60947 2026-03-20T11:46:51.385 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.439 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Summary: 1 local files to upload 2026-03-20T11:46:51.439 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: String 'b'ubuntu'' encoded to 'ubuntu' 2026-03-20T11:46:51.440 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: String 'b'ubuntu'' encoded to 'ubuntu' 2026-03-20T11:46:51.440 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: attr_header: {'x-amz-meta-s3cmd-attrs': 'atime:1774007211/ctime:1774007211/gid:1000/gname:ubuntu/md5:5dcdf4bbac3a98ff8fc76ef3a8426b0c/mode:33188/mtime:1774007211/uid:1000/uname:ubuntu'} 2026-03-20T11:46:51.440 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: DeUnicodising '/tmp/huge_obj.temp.60947' using UTF-8 2026-03-20T11:46:51.443 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:51.443 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:51.443 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-type:application/octet-stream 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114651Z 2026-03-20T11:46:51.444 
INFO:tasks.workunit.client.0.vm00.stderr:x-amz-meta-s3cmd-attrs:atime:1774007211/ctime:1774007211/gid:1000/gname:ubuntu/md5:5dcdf4bbac3a98ff8fc76ef3a8426b0c/mode:33188/mtime:1774007211/uid:1000/uname:ubuntu 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-storage-class:STANDARD 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:POST 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:uploads= 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:content-type:application/octet-stream 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114651Z 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-meta-s3cmd-attrs:atime:1774007211/ctime:1774007211/gid:1000/gname:ubuntu/md5:5dcdf4bbac3a98ff8fc76ef3a8426b0c/mode:33188/mtime:1774007211/uid:1000/uname:ubuntu 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-storage-class:STANDARD 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-type': 'application/octet-stream', 'x-amz-meta-s3cmd-attrs': 'atime:1774007211/ctime:1774007211/gid:1000/gname:ubuntu/md5:5dcdf4bbac3a98ff8fc76ef3a8426b0c/mode:33188/mtime:1774007211/uid:1000/uname:ubuntu', 'x-amz-storage-class': 'STANDARD', 'x-amz-date': '20260320T114651Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=32d1fbafad533b81c806732b8134ceb4dc7a0094ac81c4715aa7f29c1c32897a', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'} 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 
2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): creating new connection: http://vm00.local 2026-03-20T11:46:51.444 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: non-proxied HTTPConnection(vm00.local, None) 2026-03-20T11:46:51.445 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?uploads 2026-03-20T11:46:51.445 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='POST', uri='/multipart-bkt/multipart-obj?uploads', headers={'content-type': 'application/octet-stream', 'x-amz-meta-s3cmd-attrs': 'atime:1774007211/ctime:1774007211/gid:1000/gname:ubuntu/md5:5dcdf4bbac3a98ff8fc76ef3a8426b0c/mode:33188/mtime:1774007211/uid:1000/uname:ubuntu', 'x-amz-storage-class': 'STANDARD', 'x-amz-date': '20260320T114651Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-s3cmd-attrs;x-amz-storage-class,Signature=32d1fbafad533b81c806732b8134ceb4dc7a0094ac81c4715aa7f29c1c32897a', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 bytes) 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#1) 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'multipart-b' 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: b'ktmultipart-obj2~srrCoDeqIR2yyij5eh9hG' 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: b'j9hYqhfEZw', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '257', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'content-type': 'application/xml', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:52 GMT', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx00000371b4ed9673ed331-0069bd33ab-4214-default'}, 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MultiPart: Uploading /tmp/huge_obj.temp.60947 in 4 parts 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Uploading part 1 of '2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw' (15728640 bytes) 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:52.787 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Sending file '/tmp/huge_obj.temp.60947', please wait... 
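Annotation: the 53,477,376-byte (51 MiB) temp file combined with multipart_chunk_size_mb=15 gives ceil(51/15) = 4 parts, exactly what s3cmd reports ("Uploading /tmp/huge_obj.temp.60947 in 4 parts"): initiate with POST ?uploads, then PUT ?partNumber=N&uploadId=..., then complete. A hedged boto3 sketch of the same sequence against the RGW endpoint; the workunit itself uses s3cmd, and the endpoint URL and credentials below are placeholders:

    import math, os
    import boto3

    ENDPOINT = "http://vm00.local"        # RGW beast frontend used by this job
    CHUNK = 15 * 1024 * 1024              # s3cmd's multipart_chunk_size_mb=15

    s3 = boto3.client("s3", endpoint_url=ENDPOINT,
                      aws_access_key_id="<access>", aws_secret_access_key="<secret>")

    def multipart_put(bucket, key, path):
        size = os.path.getsize(path)
        nparts = math.ceil(size / CHUNK)  # 53477376 bytes -> 4 parts
        mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        with open(path, "rb") as f:
            for n in range(1, nparts + 1):
                data = f.read(CHUNK)
                r = s3.upload_part(Bucket=bucket, Key=key, PartNumber=n,
                                   UploadId=mpu["UploadId"], Body=data)
                parts.append({"PartNumber": n, "ETag": r["ETag"]})
        s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                                     MultipartUpload={"Parts": parts})

    # multipart_put("multipart-bkt", "multipart-obj", "/tmp/huge_obj.temp.60947")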
2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-length:15728640 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:a5217bb8706fced9cd33ed94d17bdba9bb1a92aa5d2e538e74ee9ced56ab553d 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:partNumber=1&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:content-length:15728640 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:a5217bb8706fced9cd33ed94d17bdba9bb1a92aa5d2e538e74ee9ced56ab553d 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:content-length;host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:a5217bb8706fced9cd33ed94d17bdba9bb1a92aa5d2e538e74ee9ced56ab553d 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-length': '15728640', 'x-amz-date': '20260320T114652Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=3f7916ca71d666d9faba8ddfedf4f5473641cf988bc12fba5ad8267f938b60dd', 'x-amz-content-sha256': 'a5217bb8706fced9cd33ed94d17bdba9bb1a92aa5d2e538e74ee9ced56ab553d'} 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): re-using connection: http://vm00.local#1 2026-03-20T11:46:52.798 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?partNumber=1&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#2) 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'accept-ranges': 'bytes', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'connection': 'Keep-Alive', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:52 GMT', 2026-03-20T11:46:52.874 
INFO:tasks.workunit.client.0.vm00.stderr: 'etag': '"a2c2fc5be9e1c1af74973af566b6644f"', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx00000ee3f240f2db7dfd9-0069bd33ac-4214-default'}, 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'size': 15728640, 2026-03-20T11:46:52.874 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:52.875 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MD5 sums: computed=a2c2fc5be9e1c1af74973af566b6644f, received=a2c2fc5be9e1c1af74973af566b6644f 2026-03-20T11:46:52.875 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Uploading part 2 of '2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw' (15728640 bytes) 2026-03-20T11:46:52.875 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:52.875 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Sending file '/tmp/huge_obj.temp.60947', please wait... 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-length:15728640 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:68aa7bf6248d62eeee56b0d296fdbfde1b4b350a6f61595f6d48c76f64a51b7d 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:partNumber=2&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:content-length:15728640 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:68aa7bf6248d62eeee56b0d296fdbfde1b4b350a6f61595f6d48c76f64a51b7d 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:content-length;host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:68aa7bf6248d62eeee56b0d296fdbfde1b4b350a6f61595f6d48c76f64a51b7d 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-length': '15728640', 'x-amz-date': '20260320T114652Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=f73868440496c574f3c50a83ead33b57b80f284803f8eb0695034c2b5b9a18b3', 'x-amz-content-sha256': '68aa7bf6248d62eeee56b0d296fdbfde1b4b350a6f61595f6d48c76f64a51b7d'} 
2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): re-using connection: http://vm00.local#2 2026-03-20T11:46:52.885 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?partNumber=2&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:52.951 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#3) 2026-03-20T11:46:52.951 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:52.951 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'accept-ranges': 'bytes', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'connection': 'Keep-Alive', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:52 GMT', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'etag': '"93ad42ccaf7a043cc77208401a79fb1c"', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx000007db795a048a42b17-0069bd33ac-4214-default'}, 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'size': 15728640, 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MD5 sums: computed=93ad42ccaf7a043cc77208401a79fb1c, received=93ad42ccaf7a043cc77208401a79fb1c 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Uploading part 3 of '2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw' (15728640 bytes) 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:52.952 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Sending file '/tmp/huge_obj.temp.60947', please wait... 
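[editor's note] The sequence being logged here is a standard S3 multipart upload against the RGW endpoint: s3cmd PUTs each ~15 MiB part with partNumber and uploadId query parameters, compares the returned ETag with its locally computed MD5, and finally POSTs a CompleteMultipartUpload body listing the parts (visible a little further below). A rough boto3 equivalent, offered only as an illustration (the workunit itself drives s3cmd, and the endpoint, credentials and local file path here are placeholders):

    import boto3

    # Placeholder endpoint/credentials; the test's real values come from its s3cmd config.
    s3 = boto3.client("s3", endpoint_url="http://vm00.local",
                      aws_access_key_id="ACCESS", aws_secret_access_key="SECRET",
                      region_name="us-east-1")

    bucket, key = "multipart-bkt", "multipart-obj"
    upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]

    parts = []
    with open("/tmp/huge_obj", "rb") as f:           # hypothetical local source file
        num = 1
        while True:
            chunk = f.read(15 * 1024 * 1024)          # 15 MiB parts, as in the log
            if not chunk:
                break
            resp = s3.upload_part(Bucket=bucket, Key=key, PartNumber=num,
                                  UploadId=upload_id, Body=chunk)
            parts.append({"PartNumber": num, "ETag": resp["ETag"]})
            num += 1

    s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id,
                                 MultipartUpload={"Parts": parts})

The composite ETag reported after completion ("a2cbe6f639f1af7e093851495b6add34-4" in the bucket listing below) follows the usual multipart convention: the MD5 of the concatenated per-part MD5 digests with a "-<part count>" suffix, which is why it differs from a plain MD5 of the ~51 MiB object.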
2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-length:15728640 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:8480aca859e15540de247aff72d78864d8afdec822fe57a535dd1ca71a0a5afc 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:partNumber=3&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:content-length:15728640 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:8480aca859e15540de247aff72d78864d8afdec822fe57a535dd1ca71a0a5afc 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114652Z 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:content-length;host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:8480aca859e15540de247aff72d78864d8afdec822fe57a535dd1ca71a0a5afc 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-length': '15728640', 'x-amz-date': '20260320T114652Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=fd07219f65172a3f3ed748733a8def049183a3ce09faa8d1172778ac2f4098ff', 'x-amz-content-sha256': '8480aca859e15540de247aff72d78864d8afdec822fe57a535dd1ca71a0a5afc'} 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): re-using connection: http://vm00.local#3 2026-03-20T11:46:52.961 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?partNumber=3&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#4) 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'accept-ranges': 'bytes', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'connection': 'Keep-Alive', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.029 
INFO:tasks.workunit.client.0.vm00.stderr: 'etag': '"fea2b7319fae13a1534abb7ab87f4398"', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx00000888d0b88a5ae8690-0069bd33ac-4214-default'}, 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'size': 15728640, 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MD5 sums: computed=fea2b7319fae13a1534abb7ab87f4398, received=fea2b7319fae13a1534abb7ab87f4398 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Uploading part 4 of '2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw' (6291456 bytes) 2026-03-20T11:46:53.029 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:53.030 INFO:tasks.workunit.client.0.vm00.stderr:INFO: Sending file '/tmp/huge_obj.temp.60947', please wait... 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-length:6291456 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:88e22411309bc60d3dd8f1de6461dde132f5a64358d5eac8771dfc0af912a730 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:53.034 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:partNumber=4&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:content-length:6291456 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:88e22411309bc60d3dd8f1de6461dde132f5a64358d5eac8771dfc0af912a730 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:content-length;host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:88e22411309bc60d3dd8f1de6461dde132f5a64358d5eac8771dfc0af912a730 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-length': '6291456', 'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=771b5fe40792e7ca41d348eaba458c7a8509e1f9fea39b253f1a9a0e825d9ecb', 'x-amz-content-sha256': '88e22411309bc60d3dd8f1de6461dde132f5a64358d5eac8771dfc0af912a730'} 2026-03-20T11:46:53.035 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): re-using connection: http://vm00.local#4 2026-03-20T11:46:53.035 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?partNumber=4&uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#5) 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'accept-ranges': 'bytes', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'connection': 'Keep-Alive', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'etag': '"46745ff6fd975e4cc3ba8f54db05fbc9"', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx0000005c483c2ba3ae549-0069bd33ad-4214-default'}, 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'size': 6291456, 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MD5 sums: computed=46745ff6fd975e4cc3ba8f54db05fbc9, received=46745ff6fd975e4cc3ba8f54db05fbc9 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MultiPart: Upload finished: 4 parts 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: MultiPart: Completing upload: 2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/multipart-obj 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = content-length:387 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:6d4e5840db9d7b71af587d190e24673b3817badaa471e5d2dd5c3326b2338878 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.078 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:POST 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/multipart-obj 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:content-length:387 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.079 
INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:6d4e5840db9d7b71af587d190e24673b3817badaa471e5d2dd5c3326b2338878 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:content-length;host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:6d4e5840db9d7b71af587d190e24673b3817badaa471e5d2dd5c3326b2338878 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'content-length': '387', 'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=2537fe41b806f456c78530a94eb2480880365110981fa7031f240df51f489465', 'x-amz-content-sha256': '6d4e5840db9d7b71af587d190e24673b3817badaa471e5d2dd5c3326b2338878'} 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): re-using connection: http://vm00.local#5 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/multipart-obj?uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw 2026-03-20T11:46:53.079 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='POST', uri='/multipart-bkt/multipart-obj?uploadId=2~srrCoDeqIR2yyij5eh9hGj9hYqhfEZw', headers={'content-length': '387', 'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date,Signature=2537fe41b806f456c78530a94eb2480880365110981fa7031f240df51f489465', 'x-amz-content-sha256': '6d4e5840db9d7b71af587d190e24673b3817badaa471e5d2dd5c3326b2338878'}, body=(387 bytes) 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#6) 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'vm00.loca' 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: b'l/multipart-bkt/multipart-objmultipart-bktmultipart-obj"a2cbe6f639f1af7e093851495b6add' 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: b'34-4"', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '321', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'content-type': 'application/xml', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx00000b2dc2a735d60248a-0069bd33ad-4214-default'}, 2026-03-20T11:46:53.127 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.127 
INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.198 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: s3cmd version 2.4.0 2026-03-20T11:46:53.198 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: Reading file '/tmp/s3config.60947' 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_base->vm00.local 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: access_key->05...17_chars...4 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: secret_key->h7...53_chars...= 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: bucket_location->us-east-1 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_certificate->True 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_hostname->True 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: default_mime_type->binary/octet-stream 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: delete_removed->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: dry_run->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: enable_multipart->True 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encoding->UTF-8 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encrypt->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: follow_symlinks->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: force->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: guess_mime_type->True 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_bucket->anything.with.three.dots 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_chunk_size_mb->15 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_max_chunks->10000 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recursive->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recv_chunk->65536 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: send_chunk->65536 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: signature_v2->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: socket_timeout->300 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_https->False 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_mime_magic->True 2026-03-20T11:46:53.199 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: verbosity->WARNING 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config cache_file -> 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config follow_symlinks -> False 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config verbosity -> 10 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Command: ls 2026-03-20T11:46:53.200 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/ 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(None): vm00.local 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = host:vm00.local 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:GET 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:/ 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=8ad4b3b46ad9d13139367e9e5e0ec2302efb3af6a5e4210463e6a897b95e95fc', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'} 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(None): vm00.local 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): creating new connection: http://vm00.local 2026-03-20T11:46:53.200 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: non-proxied HTTPConnection(vm00.local, None) 2026-03-20T11:46:53.201 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): / 2026-03-20T11:46:53.201 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='GET', uri='/', headers={'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=8ad4b3b46ad9d13139367e9e5e0ec2302efb3af6a5e4210463e6a897b95e95fc', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 bytes) 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#1) 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'testidM. 
Testermulti' 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: b'part-bkt2026-03-20T11:46:51.305Z' 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: b'', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'content-type': 'application/xml', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'transfer-encoding': 'chunked', 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx00000aebff41e3d6aab1b-0069bd33ad-4214-default'}, 2026-03-20T11:46:53.202 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.203 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.203 INFO:tasks.workunit.client.0.vm00.stdout:2026-03-20 11:46 s3://multipart-bkt 2026-03-20T11:46:53.274 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: s3cmd version 2.4.0 2026-03-20T11:46:53.274 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: Reading file '/tmp/s3config.60947' 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_base->vm00.local 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: access_key->05...17_chars...4 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: secret_key->h7...53_chars...= 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: bucket_location->us-east-1 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_certificate->True 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_hostname->True 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: default_mime_type->binary/octet-stream 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: delete_removed->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: dry_run->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: enable_multipart->True 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encoding->UTF-8 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encrypt->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: follow_symlinks->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: force->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: guess_mime_type->True 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_bucket->anything.with.three.dots 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_chunk_size_mb->15 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_max_chunks->10000 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recursive->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recv_chunk->65536 2026-03-20T11:46:53.275 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: send_chunk->65536 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: signature_v2->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: socket_timeout->300 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_https->False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_mime_magic->True 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: verbosity->WARNING 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config cache_file -> 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config follow_symlinks -> False 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config verbosity -> 10 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Command: ls 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Bucket 's3://multipart-bkt': 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/ 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = host:vm00.local 2026-03-20T11:46:53.275 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:GET 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:/multipart-bkt/ 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:delimiter=%2F 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=54e1796462359d2f0bea4d599cfdbaeebed7921c5dda552194ecc4b3406650f5', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'} 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 
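[editor's note] The ConfigParser lines repeated throughout this excerpt dump the s3cmd configuration the workunit generated at /tmp/s3config.60947 and passes to every invocation via --config. A sketch of such a file, written from Python with the option names the DEBUG dump shows (the keys here are placeholders, the output path is hypothetical, and the real file may carry more options than the dump prints):

    from textwrap import dedent

    # Placeholder credentials; the workunit fills these in from the RGW user it created.
    s3config = dedent("""\
        [default]
        host_base = vm00.local
        host_bucket = anything.with.three.dots
        access_key = ACCESSKEYPLACEHOLDER
        secret_key = SECRETKEYPLACEHOLDER
        bucket_location = us-east-1
        use_https = False
        signature_v2 = False
        enable_multipart = True
        multipart_chunk_size_mb = 15
        socket_timeout = 300
        """)

    with open("/tmp/s3config.example", "w") as f:    # hypothetical path
        f.write(s3config)

With use_https = False and signature_v2 = False, s3cmd talks plain HTTP to the beast frontend and signs with SigV4, which matches the request traces in this log.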
2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(multipart-bkt): vm00.local 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): creating new connection: http://vm00.local 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: non-proxied HTTPConnection(vm00.local, None) 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /multipart-bkt/?delimiter=%2F 2026-03-20T11:46:53.276 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='GET', uri='/multipart-bkt/?delimiter=%2F', headers={'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=54e1796462359d2f0bea4d599cfdbaeebed7921c5dda552194ecc4b3406650f5', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 bytes) 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#1) 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'multipart-bkt1000/falsemultipart-obj2026-03-20T11:46:53.000Z"a2cbe6f639f1af7' 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: b'e093851495b6add34-4"53477376S' 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: b'TANDARDtestidM. TesterNormal<' 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: b'/ListBucketResult>', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'content-type': 'application/xml', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'transfer-encoding': 'chunked', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx000003b22de2c6fc5be51-0069bd33ad-4214-default'}, 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.278 INFO:tasks.workunit.client.0.vm00.stdout:2026-03-20 11:46 53477376 s3://multipart-bkt/multipart-obj 2026-03-20T11:46:53.354 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: s3cmd version 2.4.0 2026-03-20T11:46:53.354 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: Reading file '/tmp/s3config.60947' 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_base->vm00.local 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: access_key->05...17_chars...4 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: secret_key->h7...53_chars...= 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: bucket_location->us-east-1 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_certificate->True 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: check_ssl_hostname->True 2026-03-20T11:46:53.355 
INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: default_mime_type->binary/octet-stream 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: delete_removed->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: dry_run->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: enable_multipart->True 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encoding->UTF-8 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: encrypt->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: follow_symlinks->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: force->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: guess_mime_type->True 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: host_bucket->anything.with.three.dots 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_chunk_size_mb->15 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: multipart_max_chunks->10000 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recursive->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: recv_chunk->65536 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: send_chunk->65536 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: signature_v2->False 2026-03-20T11:46:53.355 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: socket_timeout->300 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_https->False 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: use_mime_magic->True 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConfigParser: verbosity->WARNING 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config cache_file -> 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config follow_symlinks -> False 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Updating Config.Config verbosity -> 10 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Command: mb 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: CreateRequest: resource[uri]=/ 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Using signature v4 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(incomplete-mp-bkt-1): vm00.local 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: canonical_headers = host:vm00.local 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Canonical Request: 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:PUT 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:/incomplete-mp-bkt-1/ 2026-03-20T11:46:53.356 
INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:host:vm00.local 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:x-amz-date:20260320T114653Z 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr: 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:host;x-amz-content-sha256;x-amz-date 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:---------------------- 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: signature-v4 headers: {'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=055ad44a87a7d520734724fbbf33ca4eec3c208a136fd70bcf954e3e324423ae', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'} 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Processing request, please wait... 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: get_hostname(incomplete-mp-bkt-1): vm00.local 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.get(): creating new connection: http://vm00.local 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: non-proxied HTTPConnection(vm00.local, None) 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: format_uri(): /incomplete-mp-bkt-1/ 2026-03-20T11:46:53.356 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Sending request method_string='PUT', uri='/incomplete-mp-bkt-1/', headers={'x-amz-date': '20260320T114653Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=0555b35654ad1656d804/20260320/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=055ad44a87a7d520734724fbbf33ca4eec3c208a136fd70bcf954e3e324423ae', 'x-amz-content-sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'}, body=(0 bytes) 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: ConnMan.put(): connection put back to pool (http://vm00.local#1) 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr:DEBUG: Response: 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr:{'data': b'', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'headers': {'connection': 'Keep-Alive', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'content-length': '0', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'date': 'Fri, 20 Mar 2026 11:46:53 GMT', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'server': 'Ceph Object Gateway (tentacle)', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'x-amz-request-id': 'tx000002b608d13721a71ed-0069bd33ad-4214-default'}, 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'reason': 'OK', 2026-03-20T11:46:53.365 INFO:tasks.workunit.client.0.vm00.stderr: 'status': 200} 2026-03-20T11:46:53.366 INFO:tasks.workunit.client.0.vm00.stdout:Bucket 's3://incomplete-mp-bkt-1/' created 2026-03-20T11:46:55.018 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61254 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:55.228 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61269 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:55.441 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61284 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:55.656 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61299 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:55.872 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61314 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:56.087 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61329 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:56.311 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61344 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:56.541 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61359 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:56.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61374 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:56.988 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61389 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:57.223 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61404 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:57.447 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61419 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:57.668 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61434 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:57.880 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61449 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:58.100 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61464 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:58.313 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61480 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:58.532 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61495 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:58.622 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61510 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:58.993 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61526 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:59.209 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61541 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:59.427 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61556 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:59.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61571 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:46:59.862 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61586 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:00.086 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61601 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:00.295 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61616 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:00.381 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61632 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:00.715 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61647 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:00.920 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61662 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:01.136 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61677 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:01.351 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61692 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:01.575 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61707 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:01.790 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61722 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:02.006 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61737 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:02.223 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61752 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:02.439 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61767 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:02.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61782 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:47:02.866 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 61798 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo
[... the same "Killed s3cmd --config=${s3config} put ..." stderr record from test_rgw_orphan_list.sh line 159 repeats for each subsequent backgrounded s3cmd process, PIDs 61813 through 65589, at sub-second intervals from 2026-03-20T11:47:03 through 2026-03-20T11:48:00 ...]
2026-03-20T11:48:00.220 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65605 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:00.305 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65620 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:00.663 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65635 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:00.914 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65650 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:01.148 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65665 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:01.374 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65680 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:01.589 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65695 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:01.822 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65710 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:02.044 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65725 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:02.270 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65740 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:02.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65755 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:02.889 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65770 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:03.114 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65785 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:03.343 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65800 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:03.565 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65815 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:03.797 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65830 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:04.015 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65845 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:04.238 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65861 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:04.453 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65876 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:04.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65891 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:04.889 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65906 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:05.113 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65921 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:05.332 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65936 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:05.557 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65951 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:05.849 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65966 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:06.069 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65981 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:06.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 65996 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:06.512 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66011 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:06.732 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66026 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:06.947 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66041 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:07.168 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66056 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:07.379 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66071 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:07.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66086 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:07.845 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66101 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:08.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66116 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:08.298 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66131 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:08.532 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66146 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:08.756 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66162 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:08.845 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66177 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:09.223 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66193 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:09.441 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66208 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:09.664 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66223 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:09.883 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66238 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:10.096 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66253 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:10.368 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66268 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:10.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66283 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:10.842 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66298 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:11.066 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66313 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:11.294 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66328 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:11.523 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66343 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:11.743 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66358 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:11.970 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66373 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:12.190 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66388 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:12.495 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66403 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:12.741 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66418 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:12.955 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66433 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:13.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66449 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:13.380 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66464 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:13.628 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66479 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:13.855 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66494 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:14.067 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66509 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:14.293 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66524 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:14.507 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66539 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:14.721 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66554 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:14.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66569 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:15.157 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66585 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:15.368 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66600 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:15.452 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66615 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:15.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66630 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:16.019 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66645 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:16.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66660 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:16.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66675 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:16.679 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66690 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:16.900 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66705 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:17.118 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66720 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:17.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66735 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:17.590 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66750 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:17.814 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66765 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:18.027 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66781 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:18.242 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66797 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:18.461 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66812 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:18.675 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66827 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:18.891 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66842 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:19.115 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66857 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:19.329 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66872 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:19.563 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66887 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:19.784 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66902 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:20.008 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66917 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:20.227 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66932 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:20.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66947 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:20.665 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66962 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:20.892 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66977 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:21.112 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 66992 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:21.332 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67007 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:21.559 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67023 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:21.777 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67038 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:21.997 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67053 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:22.233 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67068 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:22.332 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67083 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:22.690 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67098 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:22.914 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67113 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:23.134 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67128 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:23.353 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67143 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:23.580 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67158 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:23.926 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67173 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:24.141 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67188 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:24.356 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67203 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:24.571 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67218 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:24.794 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67233 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:25.017 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67248 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:25.110 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67263 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:25.331 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67278 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:25.688 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67293 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:25.997 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67308 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:26.218 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67324 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:26.439 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67339 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:26.668 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67354 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:26.985 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67369 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:27.220 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67384 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:27.437 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67399 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:27.665 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67414 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:27.977 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67429 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:28.207 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67444 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:28.426 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67459 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:28.656 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67474 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:28.879 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67489 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:29.201 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67504 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:29.439 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67519 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:29.651 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67534 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:29.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67549 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:30.102 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67564 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:30.313 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67580 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:30.531 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67595 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:30.745 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67610 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:30.968 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67625 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:31.056 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67640 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:31.435 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67655 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:31.661 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67670 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:31.902 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67685 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:32.128 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67700 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:32.349 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67715 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:32.576 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67730 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:32.799 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67745 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:32.890 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67760 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:33.256 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67775 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:33.347 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67790 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:33.734 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67805 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:33.960 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67820 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:34.197 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67835 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:34.425 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67850 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:34.665 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67865 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:34.898 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67880 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:34.995 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67895 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:35.378 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67910 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:35.631 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67925 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:35.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67940 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:48:36.125 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67955 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo
2026-03-20T11:48:36.366 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67970 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo
2026-03-20T11:48:36.632 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 67985 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo
[the same "Killed s3cmd ... put" stderr record from test_rgw_orphan_list.sh line 159 repeats continuously, identical except for PID and timestamp: PIDs 68000 through 71715, timestamps 2026-03-20T11:48:36.876 through 2026-03-20T11:49:46.625]
2026-03-20T11:49:46.979 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71730 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:47.200 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71745 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:47.415 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71760 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:47.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71775 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:47.869 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71790 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:48.086 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71805 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:48.301 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71820 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:48.518 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71835 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:48.739 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71850 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:48.955 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71865 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:49.169 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71880 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:49.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71895 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:49.592 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71910 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:49.810 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71925 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:50.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71940 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:50.261 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71955 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:50.344 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71970 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:50.680 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 71985 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:50.885 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72000 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:51.103 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72015 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:51.318 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72030 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:51.522 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72045 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:51.788 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72060 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:51.995 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72075 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:52.208 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72090 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:52.420 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72105 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:52.632 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72120 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:52.844 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72135 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:53.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72150 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:53.278 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72165 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:53.500 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72180 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:53.715 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72195 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:53.925 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72210 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:54.139 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72225 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:54.370 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72240 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:54.581 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72255 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:54.844 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72270 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:55.068 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72285 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:55.293 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72300 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:55.505 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72315 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:55.721 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72330 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:55.937 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72345 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:56.154 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72360 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:56.388 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72375 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:56.600 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72390 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:56.830 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72405 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:57.045 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72420 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:57.259 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72435 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:57.469 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72450 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:57.681 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72465 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:57.896 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72480 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:58.117 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72495 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:58.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72510 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:58.537 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72525 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:58.741 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72540 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:58.954 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72555 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:59.174 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72570 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:59.392 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72585 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:59.618 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72600 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:49:59.833 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72615 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:00.052 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72630 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:00.268 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72645 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:00.476 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72660 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:00.690 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72675 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:00.904 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72690 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:01.336 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72705 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:01.557 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72720 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:01.644 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72735 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:02.003 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72750 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:02.229 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72765 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:02.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72780 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:02.671 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72795 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:02.893 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72810 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:03.121 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72825 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:03.341 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72840 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:03.607 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72855 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:03.820 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72870 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:03.910 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72885 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:04.286 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72900 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:04.524 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72915 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:04.626 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72930 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:05.032 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72945 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:05.260 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72961 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:05.483 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72976 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:05.697 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 72991 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:05.928 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73006 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:06.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73021 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73036 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:06.601 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73051 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:06.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73066 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:07.164 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73081 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:07.375 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73096 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:07.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73111 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:07.832 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73126 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:08.053 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73141 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:08.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73156 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:08.498 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73171 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:08.714 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73186 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:08.806 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73201 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:09.183 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73216 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:09.414 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73231 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:09.639 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73246 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:09.853 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73261 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:10.111 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73276 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:10.322 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73291 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:10.533 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73306 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:10.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73321 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:10.974 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73336 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:11.194 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73351 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:11.528 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73366 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:11.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73381 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:12.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73396 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:12.500 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73411 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:12.717 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73426 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:12.934 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73441 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:13.158 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73456 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:13.374 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73471 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:13.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73486 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:13.834 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73501 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:14.068 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73516 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:14.285 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73531 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:14.566 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73546 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:14.787 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73561 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:15.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73576 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:15.232 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73591 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:15.457 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73606 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:15.732 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73621 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:16.012 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73636 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:16.248 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73651 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:16.469 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73666 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:16.732 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73681 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:16.997 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73696 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:17.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73711 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:17.527 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73726 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:17.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73741 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:18.022 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73756 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:18.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73771 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:18.468 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73786 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:18.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73801 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:18.936 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73816 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:19.161 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73831 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:19.402 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73846 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:19.633 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73861 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:19.731 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73876 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:20.076 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73891 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:20.310 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73906 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:20.525 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73921 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:20.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73936 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:20.961 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73951 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:21.174 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73966 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:21.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73981 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:21.637 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 73996 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:21.851 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74011 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:22.076 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74026 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:22.305 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74041 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:22.528 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74056 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:22.755 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74071 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:23.052 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74086 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:23.287 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/workunits/rgw/test_rgw_orphan_list.sh: line 159: 74101 Killed s3cmd --config=${s3config} put $local_file s3://${remote_bkt}/${remote_obj} --progress --multipart-chunk-size-mb=5 > $fifo 2026-03-20T11:50:23.444 DEBUG:teuthology.exit:Got signal 15; running 1 handler... 2026-03-20T11:50:23.445 DEBUG:teuthology.exit:Finished running handlers
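The repeated records above show the workunit's s3cmd multipart uploads being killed, apparently by design: test_rgw_orphan_list.sh seems to abort each put part-way so that incomplete multipart data is left behind for the orphan-list tooling to find, and bash reports every aborted background put as "Killed" on its stderr. The trailing "Got signal 15" lines come from teuthology's own exit handler, i.e. the run itself was terminated while the workunit was still looping. A minimal bash sketch of the abort-an-upload-mid-flight pattern visible in the log follows; the variable names (s3config, local_file, remote_bkt, remote_obj, fifo) mirror the log, but their values, the sleep-based timing, and the loop count are illustrative assumptions, not the actual workunit.

    #!/usr/bin/env bash
    # Sketch only: start an s3cmd multipart put with its output sent to a FIFO,
    # then SIGKILL it mid-upload. bash reports the reaped job as
    # "line N: <pid> Killed  s3cmd ...", matching the records in this log.
    set -u
    s3config=./s3cfg            # assumed path to an s3cmd config for the test RGW
    local_file=./big_input      # assumed large local file (big enough for multipart)
    remote_bkt=orphan-test      # assumed bucket name
    remote_obj=big_object       # assumed object name
    fifo=$(mktemp -u)
    mkfifo "$fifo"

    for i in 1 2 3; do
        # Drain the FIFO so the uploader never blocks on a full pipe.
        cat "$fifo" > /dev/null &
        drain_pid=$!

        # Start the upload in the background, output to the FIFO (as on line 159).
        s3cmd --config="${s3config}" put "$local_file" "s3://${remote_bkt}/${remote_obj}" \
            --progress --multipart-chunk-size-mb=5 > "$fifo" &
        put_pid=$!

        # Let the multipart upload get under way, then kill it part-way through,
        # leaving incomplete multipart data behind on the RGW side.
        sleep 2
        kill -9 "$put_pid" 2>/dev/null || true
        wait "$put_pid" 2>/dev/null || true     # shell prints the "Killed" record here
        wait "$drain_pid" 2>/dev/null || true
    done
    rm -f "$fifo"

Each iteration of such a loop would emit one "line 159: <pid> Killed ..." stderr record like those captured above, which is consistent with the steady stream of near-identical records rather than a single failure.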