2026-03-08T22:38:16.154 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-08T22:38:16.157 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T22:38:16.176 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-08_21:49:43-rados:standalone-squid-none-default-vps/279
branch: squid
description: rados:standalone/{supported-random-distro$/{centos_latest} workloads/mon-stretch}
email: null
first_in_suite: false
flavor: default
job_id: '279'
last_in_suite: false
machine_type: vps
name: kyr-2026-03-08_21:49:43-rados:standalone-squid-none-default-vps
no_nested_subset: false
openstack:
- volumes:
    count: 3
    size: 10
os_type: centos
os_version: 9.stream
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      mgr:
        debug mgr: 20
        debug ms: 1
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  selinux:
    allowlist:
    - scontext=system_u:system_r:getty_t:s0
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mgr.x
  - osd.0
  - osd.1
  - osd.2
  - client.0
seed: 5909
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
suite: rados:standalone
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm00.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvw+uAJvV2DAHrPd1QtZFg3pcBzmCS1zicaKPjjAWW5frhEGwJI/zpdqYAqAkoDgtkkW/XiPKPUDLcRQPRruIM=
tasks:
- install: null
- workunit:
    basedir: qa/standalone
    clients:
      all:
      - mon-stretch
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-08_21:49:43
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-08T22:38:16.176 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-08T22:38:16.177 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-08T22:38:16.177 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-08T22:38:16.177 INFO:teuthology.task.internal:Checking packages...
2026-03-08T22:38:16.177 INFO:teuthology.task.internal:Checking packages for os_type 'centos', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-08T22:38:16.177 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-08T22:38:16.177 INFO:teuthology.packaging:ref: None
2026-03-08T22:38:16.177 INFO:teuthology.packaging:tag: None
2026-03-08T22:38:16.177 INFO:teuthology.packaging:branch: squid
2026-03-08T22:38:16.177 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:38:16.177 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&ref=squid
2026-03-08T22:38:16.904 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678.ge911bdeb
2026-03-08T22:38:16.905 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-08T22:38:16.906 INFO:teuthology.task.internal:no buildpackages task found
2026-03-08T22:38:16.906 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-08T22:38:16.906 INFO:teuthology.task.internal:Saving configuration
2026-03-08T22:38:16.910 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-08T22:38:16.911 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-08T22:38:16.917 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm00.local', 'description': '/archive/kyr-2026-03-08_21:49:43-rados:standalone-squid-none-default-vps/279', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'centos', 'os_version': '9.stream', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-08 22:37:39.712995', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:00', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvw+uAJvV2DAHrPd1QtZFg3pcBzmCS1zicaKPjjAWW5frhEGwJI/zpdqYAqAkoDgtkkW/XiPKPUDLcRQPRruIM='}
2026-03-08T22:38:16.917 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-08T22:38:16.918 INFO:teuthology.task.internal:roles: ubuntu@vm00.local - ['mon.a', 'mgr.x', 'osd.0', 'osd.1', 'osd.2', 'client.0']
2026-03-08T22:38:16.918 INFO:teuthology.run_tasks:Running task console_log...
2026-03-08T22:38:16.925 DEBUG:teuthology.task.console_log:vm00 does not support IPMI; excluding
2026-03-08T22:38:16.925 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fa820ef4af0>, signals=[15])
2026-03-08T22:38:16.925 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-08T22:38:16.926 INFO:teuthology.task.internal:Opening connections...
2026-03-08T22:38:16.926 DEBUG:teuthology.task.internal:connecting to ubuntu@vm00.local
2026-03-08T22:38:16.926 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:38:16.987 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-08T22:38:16.988 DEBUG:teuthology.orchestra.run.vm00:> uname -m
2026-03-08T22:38:17.163 INFO:teuthology.orchestra.run.vm00.stdout:x86_64
2026-03-08T22:38:17.164 DEBUG:teuthology.orchestra.run.vm00:> cat /etc/os-release
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:NAME="CentOS Stream"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:VERSION="9"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:ID="centos"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:ID_LIKE="rhel fedora"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:VERSION_ID="9"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:PLATFORM_ID="platform:el9"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:PRETTY_NAME="CentOS Stream 9"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:ANSI_COLOR="0;31"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:LOGO="fedora-logo-icon"
2026-03-08T22:38:17.219 INFO:teuthology.orchestra.run.vm00.stdout:CPE_NAME="cpe:/o:centos:centos:9"
2026-03-08T22:38:17.220 INFO:teuthology.orchestra.run.vm00.stdout:HOME_URL="https://centos.org/"
2026-03-08T22:38:17.220 INFO:teuthology.orchestra.run.vm00.stdout:BUG_REPORT_URL="https://issues.redhat.com/"
2026-03-08T22:38:17.220 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 9"
2026-03-08T22:38:17.220 INFO:teuthology.orchestra.run.vm00.stdout:REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
2026-03-08T22:38:17.220 INFO:teuthology.lock.ops:Updating vm00.local on lock server
2026-03-08T22:38:17.224 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-08T22:38:17.226 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-08T22:38:17.227 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-08T22:38:17.227 DEBUG:teuthology.orchestra.run.vm00:> test '!' -e /home/ubuntu/cephtest
2026-03-08T22:38:17.275 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-08T22:38:17.276 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-08T22:38:17.276 DEBUG:teuthology.orchestra.run.vm00:> test -z $(ls -A /var/lib/ceph)
2026-03-08T22:38:17.331 INFO:teuthology.orchestra.run.vm00.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-08T22:38:17.331 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-08T22:38:17.343 DEBUG:teuthology.orchestra.run.vm00:> test -e /ceph-qa-ready
2026-03-08T22:38:17.389 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:38:17.588 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-08T22:38:17.590 INFO:teuthology.task.internal:Creating test directory...
2026-03-08T22:38:17.590 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-08T22:38:17.605 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-08T22:38:17.606 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-08T22:38:17.610 INFO:teuthology.task.internal:Creating archive directory...
2026-03-08T22:38:17.610 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-08T22:38:17.662 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-08T22:38:17.663 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-08T22:38:17.663 DEBUG:teuthology.orchestra.run.vm00:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-08T22:38:17.716 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:38:17.716 DEBUG:teuthology.orchestra.run.vm00:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-08T22:38:17.780 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:38:17.788 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:38:17.790 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-08T22:38:17.791 INFO:teuthology.task.internal:Configuring sudo...
2026-03-08T22:38:17.791 DEBUG:teuthology.orchestra.run.vm00:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-08T22:38:17.853 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-08T22:38:17.855 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-08T22:38:17.855 DEBUG:teuthology.orchestra.run.vm00:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-08T22:38:17.907 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T22:38:17.971 DEBUG:teuthology.orchestra.run.vm00:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T22:38:18.025 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:38:18.025 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-08T22:38:18.088 DEBUG:teuthology.orchestra.run.vm00:> sudo service rsyslog restart
2026-03-08T22:38:18.154 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-08T22:38:18.429 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-08T22:38:18.431 INFO:teuthology.task.internal:Starting timer...
2026-03-08T22:38:18.431 INFO:teuthology.run_tasks:Running task pcp...
2026-03-08T22:38:18.434 INFO:teuthology.run_tasks:Running task selinux...
2026-03-08T22:38:18.436 DEBUG:teuthology.task:Applying overrides for task selinux: {'allowlist': ['scontext=system_u:system_r:getty_t:s0']}
2026-03-08T22:38:18.436 INFO:teuthology.task.selinux:Excluding vm00: VMs are not yet supported
2026-03-08T22:38:18.436 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-08T22:38:18.436 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-08T22:38:18.436 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-08T22:38:18.436 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-08T22:38:18.437 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-08T22:38:18.438 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-08T22:38:18.439 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-08T22:38:19.076 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-08T22:38:19.081 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-08T22:38:19.081 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorymckuob98 --limit vm00.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-08T22:40:28.783 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm00.local')]
2026-03-08T22:40:28.783 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm00.local'
2026-03-08T22:40:28.784 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm00.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-08T22:40:28.848 DEBUG:teuthology.orchestra.run.vm00:> true
2026-03-08T22:40:28.929 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm00.local'
2026-03-08T22:40:28.929 INFO:teuthology.run_tasks:Running task clock...
2026-03-08T22:40:28.932 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-08T22:40:28.932 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-08T22:40:28.932 DEBUG:teuthology.orchestra.run.vm00:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T22:40:29.009 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntp.service: Unit ntp.service not loaded.
2026-03-08T22:40:29.025 INFO:teuthology.orchestra.run.vm00.stderr:Failed to stop ntpd.service: Unit ntpd.service not loaded.
2026-03-08T22:40:29.049 INFO:teuthology.orchestra.run.vm00.stderr:sudo: ntpd: command not found
2026-03-08T22:40:29.065 INFO:teuthology.orchestra.run.vm00.stdout:506 Cannot talk to daemon
2026-03-08T22:40:29.085 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntp.service: Unit ntp.service not found.
2026-03-08T22:40:29.104 INFO:teuthology.orchestra.run.vm00.stderr:Failed to start ntpd.service: Unit ntpd.service not found.
2026-03-08T22:40:29.157 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:^? vps-fra1.orleans.ddnss.de 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:^? ntp01.pingless.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:^? ntp5.kernfusion.at 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-08T22:40:29.164 INFO:teuthology.orchestra.run.vm00.stdout:^? ntp1.intra2net.com 0 6 0 - +0ns[ +0ns] +/- 0ns
2026-03-08T22:40:29.164 INFO:teuthology.run_tasks:Running task install...
2026-03-08T22:40:29.167 DEBUG:teuthology.task.install:project ceph
2026-03-08T22:40:29.167 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-08T22:40:29.167 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-08T22:40:29.167 INFO:teuthology.task.install:Using flavor: default
2026-03-08T22:40:29.170 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-08T22:40:29.170 INFO:teuthology.task.install:extra packages: []
2026-03-08T22:40:29.170 DEBUG:teuthology.task.install.rpm:_update_package_list_and_install: config is {'branch': None, 'cleanup': None, 'debuginfo': None, 'downgrade_packages': [], 'exclude_packages': [], 'extra_packages': [], 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}, 'extras': None, 'enable_coprs': [], 'flavor': 'default', 'install_ceph_packages': True, 'packages': {}, 'project': 'ceph', 'repos_only': False, 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'tag': None, 'wait_for_package': False}
2026-03-08T22:40:29.170 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:40:29.799 INFO:teuthology.task.install.rpm:Pulling from https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/
2026-03-08T22:40:29.799 INFO:teuthology.task.install.rpm:Package version is 19.2.3-678.ge911bdeb
2026-03-08T22:40:30.353 INFO:teuthology.packaging:Writing yum repo:
[ceph]
name=ceph packages for $basearch
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-noarch]
name=ceph noarch packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=ceph source packages
baseurl=https://3.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/centos/9/flavors/default/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
2026-03-08T22:40:30.353 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:40:30.353 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/etc/yum.repos.d/ceph.repo
2026-03-08T22:40:30.396 INFO:teuthology.task.install.rpm:Installing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd, bzip2, perl-Test-Harness, python3-xmltodict, python3-jmespath on remote rpm x86_64
2026-03-08T22:40:30.397 DEBUG:teuthology.orchestra.run.vm00:> if test -f /etc/yum.repos.d/ceph.repo ; then sudo sed -i -e ':a;N;$!ba;s/enabled=1\ngpg/enabled=1\npriority=1\ngpg/g' -e 's;ref/[a-zA-Z0-9_-]*/;sha1/e911bdebe5c8faa3800735d1568fcdca65db60df/;g' /etc/yum.repos.d/ceph.repo ; fi
2026-03-08T22:40:30.474 DEBUG:teuthology.orchestra.run.vm00:> sudo touch -a /etc/yum/pluginconf.d/priorities.conf ; test -e /etc/yum/pluginconf.d/priorities.conf.orig || sudo cp -af /etc/yum/pluginconf.d/priorities.conf /etc/yum/pluginconf.d/priorities.conf.orig
2026-03-08T22:40:30.565 DEBUG:teuthology.orchestra.run.vm00:> grep check_obsoletes /etc/yum/pluginconf.d/priorities.conf && sudo sed -i 's/check_obsoletes.*0/check_obsoletes = 1/g' /etc/yum/pluginconf.d/priorities.conf || echo 'check_obsoletes = 1' | sudo tee -a /etc/yum/pluginconf.d/priorities.conf
2026-03-08T22:40:30.602 INFO:teuthology.orchestra.run.vm00.stdout:check_obsoletes = 1
2026-03-08T22:40:30.606 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-03-08T22:40:30.826 INFO:teuthology.orchestra.run.vm00.stdout:41 files removed
2026-03-08T22:40:30.862 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y install ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd bzip2 perl-Test-Harness python3-xmltodict python3-jmespath
2026-03-08T22:40:32.229 INFO:teuthology.orchestra.run.vm00.stdout:ceph packages for x86_64 72 kB/s | 84 kB 00:01
2026-03-08T22:40:33.309 INFO:teuthology.orchestra.run.vm00.stdout:ceph noarch packages 11 kB/s | 12 kB 00:01
2026-03-08T22:40:34.244 INFO:teuthology.orchestra.run.vm00.stdout:ceph source packages 2.1 kB/s | 1.9 kB 00:00
2026-03-08T22:40:35.251 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - BaseOS 9.1 MB/s | 8.9 MB 00:00
2026-03-08T22:40:39.010 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - AppStream 9.3 MB/s | 27 MB 00:02
2026-03-08T22:40:43.043 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - CRB 11 MB/s | 8.0 MB 00:00
2026-03-08T22:40:44.486 INFO:teuthology.orchestra.run.vm00.stdout:CentOS Stream 9 - Extras packages 59 kB/s | 20 kB 00:00
2026-03-08T22:40:45.459 INFO:teuthology.orchestra.run.vm00.stdout:Extra Packages for Enterprise Linux 23 MB/s | 20 MB 00:00
2026-03-08T22:40:51.195 INFO:teuthology.orchestra.run.vm00.stdout:lab-extras 52 kB/s | 50 kB 00:00
2026-03-08T22:40:52.906 INFO:teuthology.orchestra.run.vm00.stdout:Package librados2-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-08T22:40:52.906 INFO:teuthology.orchestra.run.vm00.stdout:Package librbd1-2:16.2.4-5.el9.x86_64 is already installed.
2026-03-08T22:40:52.911 INFO:teuthology.orchestra.run.vm00.stdout:Package bzip2-1.0.8-11.el9.x86_64 is already installed.
2026-03-08T22:40:52.911 INFO:teuthology.orchestra.run.vm00.stdout:Package perl-Test-Harness-1:3.42-461.el9.noarch is already installed.
2026-03-08T22:40:52.939 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout:======================================================================================
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout:======================================================================================
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout:Installing:
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 6.5 k
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.5 M
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.2 M
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 145 k
2026-03-08T22:40:52.943 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.1 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 150 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 3.8 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 7.4 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 49 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 11 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 50 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 299 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 769 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 34 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 1.0 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 127 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 165 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath noarch 1.0.1-1.el9 appstream 48 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 323 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 303 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 100 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict noarch 0.12.0-15.el9 epel 22 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 85 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.1 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 171 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout:Upgrading:
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.4 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 3.2 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout:Installing dependencies:
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 epel 551 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9 appstream 104 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 22 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 31 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 2.4 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 253 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 4.7 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 17 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 ceph-noarch 17 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 25 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 baseos 351 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 appstream 30 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 appstream 3.0 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 appstream 15 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 epel 308 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 epel 19 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 baseos 40 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 epel 4.4 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 epel 25 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 163 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 baseos 72 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 baseos 794 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 appstream 164 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 epel 49 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 appstream 160 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 baseos 184 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 appstream 45 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 503 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 appstream 662 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 5.4 M
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 appstream 246 k
2026-03-08T22:40:52.944 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 epel 67 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 appstream 233 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 appstream 292 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 appstream 188 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 crb 22 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 epel 151 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 baseos 33 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 appstream 42 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 appstream 5.3 M
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 epel 838 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 baseos 93 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 appstream 1.0 M
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 crb 862 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 epel 548 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 epel 29 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 appstream 6.0 M
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 epel 60 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 epel 43 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 epel 32 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 45 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 ceph 142 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 epel 14 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 baseos 253 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 epel 173 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 epel 358 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 baseos 1.2 M
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 appstream 244 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 epel 254 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 epel 2.0 M
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 epel 144 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 epel 11 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 epel 18 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 epel 23 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 epel 20 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 epel 19 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 epel 26 k
2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout:
python3-jinja2 noarch 2.11.3-8.el9 appstream 249 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 epel 1.0 M 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 appstream 177 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-logutils noarch 0.3.5-21.el9 epel 46 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako noarch 1.1.4-6.el9 appstream 172 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 appstream 35 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 epel 79 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 epel 58 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 appstream 6.1 M 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 appstream 442 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 appstream 77 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan noarch 1.4.2-3.el9 epel 272 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 baseos 106 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend noarch 3.1.0-2.el9 epel 16 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 appstream 267 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 epel 90 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 appstream 157 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-pyasn1-modules noarch 0.4.8-7.el9 appstream 277 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 baseos 135 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 baseos 150 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 epel 31 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 baseos 126 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 appstream 54 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 epel 188 k 2026-03-08T22:40:52.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 epel 59 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 appstream 19 M 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 epel 36 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 appstream 42 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 epel 86 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 baseos 218 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob noarch 1.8.8-2.el9 epel 230 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 epel 90 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 epel 427 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 epel 20 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 
25.08.0-2.el9 appstream 240 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 appstream 66 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 epel 191 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 appstream 303 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 epel 1.6 M 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 baseos 182 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 appstream 64 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 baseos 266 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Installing weak dependencies: 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 appstream 37 k 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:====================================================================================== 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Install 135 Packages 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Upgrade 2 Packages 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Total download size: 210 M 2026-03-08T22:40:52.946 INFO:teuthology.orchestra.run.vm00.stdout:Downloading Packages: 2026-03-08T22:40:54.797 INFO:teuthology.orchestra.run.vm00.stdout:(1/137): ceph-19.2.3-678.ge911bdeb.el9.x86_64.r 12 kB/s | 6.5 kB 00:00 2026-03-08T22:40:55.645 INFO:teuthology.orchestra.run.vm00.stdout:(2/137): ceph-fuse-19.2.3-678.ge911bdeb.el9.x86 1.4 MB/s | 1.2 MB 00:00 
2026-03-08T22:40:55.749 INFO:teuthology.orchestra.run.vm00.stdout:(3/137): ceph-base-19.2.3-678.ge911bdeb.el9.x86 3.7 MB/s | 5.5 MB 00:01 2026-03-08T22:40:55.767 INFO:teuthology.orchestra.run.vm00.stdout:(4/137): ceph-immutable-object-cache-19.2.3-678 1.2 MB/s | 145 kB 00:00 2026-03-08T22:40:55.902 INFO:teuthology.orchestra.run.vm00.stdout:(5/137): ceph-mgr-19.2.3-678.ge911bdeb.el9.x86_ 8.0 MB/s | 1.1 MB 00:00 2026-03-08T22:40:55.989 INFO:teuthology.orchestra.run.vm00.stdout:(6/137): ceph-mds-19.2.3-678.ge911bdeb.el9.x86_ 10 MB/s | 2.4 MB 00:00 2026-03-08T22:40:56.400 INFO:teuthology.orchestra.run.vm00.stdout:(7/137): ceph-mon-19.2.3-678.ge911bdeb.el9.x86_ 9.5 MB/s | 4.7 MB 00:00 2026-03-08T22:40:56.984 INFO:teuthology.orchestra.run.vm00.stdout:(8/137): ceph-common-19.2.3-678.ge911bdeb.el9.x 8.0 MB/s | 22 MB 00:02 2026-03-08T22:40:57.104 INFO:teuthology.orchestra.run.vm00.stdout:(9/137): ceph-selinux-19.2.3-678.ge911bdeb.el9. 211 kB/s | 25 kB 00:00 2026-03-08T22:40:57.264 INFO:teuthology.orchestra.run.vm00.stdout:(10/137): ceph-radosgw-19.2.3-678.ge911bdeb.el9 12 MB/s | 11 MB 00:00 2026-03-08T22:40:57.322 INFO:teuthology.orchestra.run.vm00.stdout:(11/137): ceph-osd-19.2.3-678.ge911bdeb.el9.x86 13 MB/s | 17 MB 00:01 2026-03-08T22:40:57.391 INFO:teuthology.orchestra.run.vm00.stdout:(12/137): libcephfs-devel-19.2.3-678.ge911bdeb. 
265 kB/s | 34 kB 00:00 2026-03-08T22:40:57.524 INFO:teuthology.orchestra.run.vm00.stdout:(13/137): libcephsqlite-19.2.3-678.ge911bdeb.el 1.2 MB/s | 163 kB 00:00 2026-03-08T22:40:57.553 INFO:teuthology.orchestra.run.vm00.stdout:(14/137): libcephfs2-19.2.3-678.ge911bdeb.el9.x 4.2 MB/s | 1.0 MB 00:00 2026-03-08T22:40:57.646 INFO:teuthology.orchestra.run.vm00.stdout:(15/137): librados-devel-19.2.3-678.ge911bdeb.e 1.0 MB/s | 127 kB 00:00 2026-03-08T22:40:57.671 INFO:teuthology.orchestra.run.vm00.stdout:(16/137): libradosstriper1-19.2.3-678.ge911bdeb 4.2 MB/s | 503 kB 00:00 2026-03-08T22:40:57.789 INFO:teuthology.orchestra.run.vm00.stdout:(17/137): python3-ceph-argparse-19.2.3-678.ge91 381 kB/s | 45 kB 00:00 2026-03-08T22:40:57.905 INFO:teuthology.orchestra.run.vm00.stdout:(18/137): python3-ceph-common-19.2.3-678.ge911b 1.2 MB/s | 142 kB 00:00 2026-03-08T22:40:58.021 INFO:teuthology.orchestra.run.vm00.stdout:(19/137): python3-cephfs-19.2.3-678.ge911bdeb.e 1.4 MB/s | 165 kB 00:00 2026-03-08T22:40:58.039 INFO:teuthology.orchestra.run.vm00.stdout:(20/137): librgw2-19.2.3-678.ge911bdeb.el9.x86_ 14 MB/s | 5.4 MB 00:00 2026-03-08T22:40:58.137 INFO:teuthology.orchestra.run.vm00.stdout:(21/137): python3-rados-19.2.3-678.ge911bdeb.el 2.7 MB/s | 323 kB 00:00 2026-03-08T22:40:58.163 INFO:teuthology.orchestra.run.vm00.stdout:(22/137): python3-rbd-19.2.3-678.ge911bdeb.el9. 2.4 MB/s | 303 kB 00:00 2026-03-08T22:40:58.253 INFO:teuthology.orchestra.run.vm00.stdout:(23/137): python3-rgw-19.2.3-678.ge911bdeb.el9. 
859 kB/s | 100 kB 00:00 2026-03-08T22:40:58.285 INFO:teuthology.orchestra.run.vm00.stdout:(24/137): rbd-fuse-19.2.3-678.ge911bdeb.el9.x86 701 kB/s | 85 kB 00:00 2026-03-08T22:40:58.407 INFO:teuthology.orchestra.run.vm00.stdout:(25/137): rbd-nbd-19.2.3-678.ge911bdeb.el9.x86_ 1.4 MB/s | 171 kB 00:00 2026-03-08T22:40:58.528 INFO:teuthology.orchestra.run.vm00.stdout:(26/137): ceph-grafana-dashboards-19.2.3-678.ge 258 kB/s | 31 kB 00:00 2026-03-08T22:40:58.887 INFO:teuthology.orchestra.run.vm00.stdout:(27/137): ceph-mgr-cephadm-19.2.3-678.ge911bdeb 423 kB/s | 150 kB 00:00 2026-03-08T22:40:58.898 INFO:teuthology.orchestra.run.vm00.stdout:(28/137): rbd-mirror-19.2.3-678.ge911bdeb.el9.x 4.8 MB/s | 3.1 MB 00:00 2026-03-08T22:40:59.288 INFO:teuthology.orchestra.run.vm00.stdout:(29/137): ceph-mgr-dashboard-19.2.3-678.ge911bd 9.4 MB/s | 3.8 MB 00:00 2026-03-08T22:40:59.431 INFO:teuthology.orchestra.run.vm00.stdout:(30/137): ceph-mgr-modules-core-19.2.3-678.ge91 1.7 MB/s | 253 kB 00:00 2026-03-08T22:40:59.562 INFO:teuthology.orchestra.run.vm00.stdout:(31/137): ceph-mgr-rook-19.2.3-678.ge911bdeb.el 375 kB/s | 49 kB 00:00 2026-03-08T22:40:59.691 INFO:teuthology.orchestra.run.vm00.stdout:(32/137): ceph-prometheus-alerts-19.2.3-678.ge9 130 kB/s | 17 kB 00:00 2026-03-08T22:40:59.771 INFO:teuthology.orchestra.run.vm00.stdout:(33/137): ceph-mgr-diskprediction-local-19.2.3- 8.5 MB/s | 7.4 MB 00:00 2026-03-08T22:40:59.820 INFO:teuthology.orchestra.run.vm00.stdout:(34/137): ceph-volume-19.2.3-678.ge911bdeb.el9. 
2.3 MB/s | 299 kB 00:00 2026-03-08T22:40:59.901 INFO:teuthology.orchestra.run.vm00.stdout:(35/137): cephadm-19.2.3-678.ge911bdeb.el9.noar 5.8 MB/s | 769 kB 00:00 2026-03-08T22:41:00.090 INFO:teuthology.orchestra.run.vm00.stdout:(36/137): ledmon-libs-1.1.0-3.el9.x86_64.rpm 214 kB/s | 40 kB 00:00 2026-03-08T22:41:00.122 INFO:teuthology.orchestra.run.vm00.stdout:(37/137): cryptsetup-2.8.1-3.el9.x86_64.rpm 1.1 MB/s | 351 kB 00:00 2026-03-08T22:41:00.230 INFO:teuthology.orchestra.run.vm00.stdout:(38/137): libconfig-1.7.2-9.el9.x86_64.rpm 516 kB/s | 72 kB 00:00 2026-03-08T22:41:00.488 INFO:teuthology.orchestra.run.vm00.stdout:(39/137): libquadmath-11.5.0-14.el9.x86_64.rpm 715 kB/s | 184 kB 00:00 2026-03-08T22:41:00.669 INFO:teuthology.orchestra.run.vm00.stdout:(40/137): ceph-test-19.2.3-678.ge911bdeb.el9.x8 14 MB/s | 50 MB 00:03 2026-03-08T22:41:00.670 INFO:teuthology.orchestra.run.vm00.stdout:(41/137): mailcap-2.1.49-5.el9.noarch.rpm 183 kB/s | 33 kB 00:00 2026-03-08T22:41:00.675 INFO:teuthology.orchestra.run.vm00.stdout:(42/137): libgfortran-11.5.0-14.el9.x86_64.rpm 1.4 MB/s | 794 kB 00:00 2026-03-08T22:41:00.809 INFO:teuthology.orchestra.run.vm00.stdout:(43/137): python3-cffi-1.14.5-5.el9.x86_64.rpm 1.8 MB/s | 253 kB 00:00 2026-03-08T22:41:00.913 INFO:teuthology.orchestra.run.vm00.stdout:(44/137): pciutils-3.7.0-7.el9.x86_64.rpm 383 kB/s | 93 kB 00:00 2026-03-08T22:41:00.927 INFO:teuthology.orchestra.run.vm00.stdout:(45/137): python3-ply-3.11-14.el9.noarch.rpm 907 kB/s | 106 kB 00:00 2026-03-08T22:41:01.002 INFO:teuthology.orchestra.run.vm00.stdout:(46/137): python3-pycparser-2.20-6.el9.noarch.r 1.5 MB/s | 135 kB 00:00 2026-03-08T22:41:01.020 INFO:teuthology.orchestra.run.vm00.stdout:(47/137): python3-cryptography-36.0.1-5.el9.x86 3.6 MB/s | 1.2 MB 00:00 2026-03-08T22:41:01.030 INFO:teuthology.orchestra.run.vm00.stdout:(48/137): python3-pyparsing-2.4.7-9.el9.noarch. 
1.4 MB/s | 150 kB 00:00 2026-03-08T22:41:01.171 INFO:teuthology.orchestra.run.vm00.stdout:(49/137): python3-requests-2.25.1-10.el9.noarch 749 kB/s | 126 kB 00:00 2026-03-08T22:41:01.198 INFO:teuthology.orchestra.run.vm00.stdout:(50/137): python3-urllib3-1.26.5-7.el9.noarch.r 1.2 MB/s | 218 kB 00:00 2026-03-08T22:41:01.228 INFO:teuthology.orchestra.run.vm00.stdout:(51/137): unzip-6.0-59.el9.x86_64.rpm 919 kB/s | 182 kB 00:00 2026-03-08T22:41:01.359 INFO:teuthology.orchestra.run.vm00.stdout:(52/137): zip-3.0-35.el9.x86_64.rpm 1.4 MB/s | 266 kB 00:00 2026-03-08T22:41:01.413 INFO:teuthology.orchestra.run.vm00.stdout:(53/137): flexiblas-3.0.4-9.el9.x86_64.rpm 160 kB/s | 30 kB 00:00 2026-03-08T22:41:01.465 INFO:teuthology.orchestra.run.vm00.stdout:(54/137): flexiblas-openblas-openmp-3.0.4-9.el9 289 kB/s | 15 kB 00:00 2026-03-08T22:41:01.474 INFO:teuthology.orchestra.run.vm00.stdout:(55/137): boost-program-options-1.75.0-13.el9.x 376 kB/s | 104 kB 00:00 2026-03-08T22:41:01.556 INFO:teuthology.orchestra.run.vm00.stdout:(56/137): libpmemobj-1.12.1-1.el9.x86_64.rpm 1.9 MB/s | 160 kB 00:00 2026-03-08T22:41:01.584 INFO:teuthology.orchestra.run.vm00.stdout:(57/137): libnbd-1.20.3-4.el9.x86_64.rpm 1.3 MB/s | 164 kB 00:00 2026-03-08T22:41:01.604 INFO:teuthology.orchestra.run.vm00.stdout:(58/137): librabbitmq-0.11.0-7.el9.x86_64.rpm 937 kB/s | 45 kB 00:00 2026-03-08T22:41:01.665 INFO:teuthology.orchestra.run.vm00.stdout:(59/137): libstoragemgmt-1.10.1-1.el9.x86_64.rp 4.0 MB/s | 246 kB 00:00 2026-03-08T22:41:01.717 INFO:teuthology.orchestra.run.vm00.stdout:(60/137): libxslt-1.1.34-12.el9.x86_64.rpm 4.3 MB/s | 233 kB 00:00 2026-03-08T22:41:01.730 INFO:teuthology.orchestra.run.vm00.stdout:(61/137): librdkafka-1.6.1-102.el9.x86_64.rpm 4.5 MB/s | 662 kB 00:00 2026-03-08T22:41:01.754 INFO:teuthology.orchestra.run.vm00.stdout:(62/137): flexiblas-netlib-3.0.4-9.el9.x86_64.r 7.6 MB/s | 3.0 MB 00:00 2026-03-08T22:41:01.792 INFO:teuthology.orchestra.run.vm00.stdout:(63/137): 
lua-5.4.4-4.el9.x86_64.rpm 3.0 MB/s | 188 kB 00:00 2026-03-08T22:41:01.793 INFO:teuthology.orchestra.run.vm00.stdout:(64/137): lttng-ust-2.12.0-6.el9.x86_64.rpm 3.8 MB/s | 292 kB 00:00 2026-03-08T22:41:01.804 INFO:teuthology.orchestra.run.vm00.stdout:(65/137): openblas-0.3.29-1.el9.x86_64.rpm 845 kB/s | 42 kB 00:00 2026-03-08T22:41:01.908 INFO:teuthology.orchestra.run.vm00.stdout:(66/137): protobuf-3.14.0-17.el9.x86_64.rpm 8.8 MB/s | 1.0 MB 00:00 2026-03-08T22:41:01.972 INFO:teuthology.orchestra.run.vm00.stdout:(67/137): python3-devel-3.9.25-3.el9.x86_64.rpm 3.7 MB/s | 244 kB 00:00 2026-03-08T22:41:02.032 INFO:teuthology.orchestra.run.vm00.stdout:(68/137): python3-jinja2-2.11.3-8.el9.noarch.rp 4.0 MB/s | 249 kB 00:00 2026-03-08T22:41:02.088 INFO:teuthology.orchestra.run.vm00.stdout:(69/137): python3-jmespath-1.0.1-1.el9.noarch.r 861 kB/s | 48 kB 00:00 2026-03-08T22:41:02.174 INFO:teuthology.orchestra.run.vm00.stdout:(70/137): python3-libstoragemgmt-1.10.1-1.el9.x 2.0 MB/s | 177 kB 00:00 2026-03-08T22:41:02.270 INFO:teuthology.orchestra.run.vm00.stdout:(71/137): python3-babel-2.9.1-2.el9.noarch.rpm 13 MB/s | 6.0 MB 00:00 2026-03-08T22:41:02.277 INFO:teuthology.orchestra.run.vm00.stdout:(72/137): python3-mako-1.1.4-6.el9.noarch.rpm 1.6 MB/s | 172 kB 00:00 2026-03-08T22:41:02.319 INFO:teuthology.orchestra.run.vm00.stdout:(73/137): openblas-openmp-0.3.29-1.el9.x86_64.r 10 MB/s | 5.3 MB 00:00 2026-03-08T22:41:02.319 INFO:teuthology.orchestra.run.vm00.stdout:(74/137): python3-markupsafe-1.1.1-12.el9.x86_6 705 kB/s | 35 kB 00:00 2026-03-08T22:41:02.380 INFO:teuthology.orchestra.run.vm00.stdout:(75/137): python3-packaging-20.9-5.el9.noarch.r 1.3 MB/s | 77 kB 00:00 2026-03-08T22:41:02.402 INFO:teuthology.orchestra.run.vm00.stdout:(76/137): python3-numpy-f2py-1.23.5-2.el9.x86_6 5.2 MB/s | 442 kB 00:00 2026-03-08T22:41:02.451 INFO:teuthology.orchestra.run.vm00.stdout:(77/137): python3-protobuf-3.14.0-17.el9.noarch 3.7 MB/s | 267 kB 00:00 2026-03-08T22:41:02.468 
INFO:teuthology.orchestra.run.vm00.stdout:(78/137): python3-pyasn1-0.4.8-7.el9.noarch.rpm 2.3 MB/s | 157 kB 00:00 2026-03-08T22:41:02.510 INFO:teuthology.orchestra.run.vm00.stdout:(79/137): python3-pyasn1-modules-0.4.8-7.el9.no 4.6 MB/s | 277 kB 00:00 2026-03-08T22:41:02.585 INFO:teuthology.orchestra.run.vm00.stdout:(80/137): python3-numpy-1.23.5-2.el9.x86_64.rpm 20 MB/s | 6.1 MB 00:00 2026-03-08T22:41:02.609 INFO:teuthology.orchestra.run.vm00.stdout:(81/137): python3-requests-oauthlib-1.3.0-12.el 382 kB/s | 54 kB 00:00 2026-03-08T22:41:02.653 INFO:teuthology.orchestra.run.vm00.stdout:(82/137): python3-toml-0.10.2-6.el9.noarch.rpm 616 kB/s | 42 kB 00:00 2026-03-08T22:41:02.658 INFO:teuthology.orchestra.run.vm00.stdout:(83/137): qatlib-25.08.0-2.el9.x86_64.rpm 4.8 MB/s | 240 kB 00:00 2026-03-08T22:41:02.708 INFO:teuthology.orchestra.run.vm00.stdout:(84/137): qatlib-service-25.08.0-2.el9.x86_64.r 673 kB/s | 37 kB 00:00 2026-03-08T22:41:02.720 INFO:teuthology.orchestra.run.vm00.stdout:(85/137): qatzip-libs-1.3.1-1.el9.x86_64.rpm 1.0 MB/s | 66 kB 00:00 2026-03-08T22:41:02.760 INFO:teuthology.orchestra.run.vm00.stdout:(86/137): socat-1.7.4.1-8.el9.x86_64.rpm 5.7 MB/s | 303 kB 00:00 2026-03-08T22:41:02.771 INFO:teuthology.orchestra.run.vm00.stdout:(87/137): xmlstarlet-1.6.1-20.el9.x86_64.rpm 1.2 MB/s | 64 kB 00:00 2026-03-08T22:41:02.896 INFO:teuthology.orchestra.run.vm00.stdout:(88/137): lua-devel-5.4.4-4.el9.x86_64.rpm 164 kB/s | 22 kB 00:00 2026-03-08T22:41:02.915 INFO:teuthology.orchestra.run.vm00.stdout:(89/137): abseil-cpp-20211102.0-4.el9.x86_64.rp 30 MB/s | 551 kB 00:00 2026-03-08T22:41:02.937 INFO:teuthology.orchestra.run.vm00.stdout:(90/137): gperftools-libs-2.9.1-3.el9.x86_64.rp 13 MB/s | 308 kB 00:00 2026-03-08T22:41:02.979 INFO:teuthology.orchestra.run.vm00.stdout:(91/137): grpc-data-1.46.7-10.el9.noarch.rpm 475 kB/s | 19 kB 00:00 2026-03-08T22:41:03.084 INFO:teuthology.orchestra.run.vm00.stdout:(92/137): libarrow-9.0.0-15.el9.x86_64.rpm 42 MB/s | 4.4 MB 
00:00 2026-03-08T22:41:03.087 INFO:teuthology.orchestra.run.vm00.stdout:(93/137): libarrow-doc-9.0.0-15.el9.noarch.rpm 9.8 MB/s | 25 kB 00:00 2026-03-08T22:41:03.091 INFO:teuthology.orchestra.run.vm00.stdout:(94/137): liboath-2.6.12-1.el9.x86_64.rpm 15 MB/s | 49 kB 00:00 2026-03-08T22:41:03.094 INFO:teuthology.orchestra.run.vm00.stdout:(95/137): libunwind-1.6.2-1.el9.x86_64.rpm 20 MB/s | 67 kB 00:00 2026-03-08T22:41:03.099 INFO:teuthology.orchestra.run.vm00.stdout:(96/137): luarocks-3.9.2-5.el9.noarch.rpm 34 MB/s | 151 kB 00:00 2026-03-08T22:41:03.116 INFO:teuthology.orchestra.run.vm00.stdout:(97/137): parquet-libs-9.0.0-15.el9.x86_64.rpm 48 MB/s | 838 kB 00:00 2026-03-08T22:41:03.128 INFO:teuthology.orchestra.run.vm00.stdout:(98/137): python3-asyncssh-2.13.2-5.el9.noarch. 46 MB/s | 548 kB 00:00 2026-03-08T22:41:03.131 INFO:teuthology.orchestra.run.vm00.stdout:(99/137): python3-autocommand-2.2.2-8.el9.noarc 12 MB/s | 29 kB 00:00 2026-03-08T22:41:03.135 INFO:teuthology.orchestra.run.vm00.stdout:(100/137): python3-backports-tarfile-1.2.0-1.el 21 MB/s | 60 kB 00:00 2026-03-08T22:41:03.137 INFO:teuthology.orchestra.run.vm00.stdout:(101/137): python3-bcrypt-3.2.2-1.el9.x86_64.rp 17 MB/s | 43 kB 00:00 2026-03-08T22:41:03.140 INFO:teuthology.orchestra.run.vm00.stdout:(102/137): python3-cachetools-4.2.4-1.el9.noarc 13 MB/s | 32 kB 00:00 2026-03-08T22:41:03.143 INFO:teuthology.orchestra.run.vm00.stdout:(103/137): python3-certifi-2023.05.07-4.el9.noa 4.8 MB/s | 14 kB 00:00 2026-03-08T22:41:03.148 INFO:teuthology.orchestra.run.vm00.stdout:(104/137): python3-cheroot-10.0.1-4.el9.noarch. 
37 MB/s | 173 kB 00:00 2026-03-08T22:41:03.157 INFO:teuthology.orchestra.run.vm00.stdout:(105/137): python3-cherrypy-18.6.1-2.el9.noarch 44 MB/s | 358 kB 00:00 2026-03-08T22:41:03.163 INFO:teuthology.orchestra.run.vm00.stdout:(106/137): python3-google-auth-2.45.0-1.el9.noa 42 MB/s | 254 kB 00:00 2026-03-08T22:41:03.204 INFO:teuthology.orchestra.run.vm00.stdout:(107/137): python3-grpcio-1.46.7-10.el9.x86_64. 50 MB/s | 2.0 MB 00:00 2026-03-08T22:41:03.208 INFO:teuthology.orchestra.run.vm00.stdout:(108/137): python3-grpcio-tools-1.46.7-10.el9.x 35 MB/s | 144 kB 00:00 2026-03-08T22:41:03.211 INFO:teuthology.orchestra.run.vm00.stdout:(109/137): python3-jaraco-8.2.1-3.el9.noarch.rp 4.9 MB/s | 11 kB 00:00 2026-03-08T22:41:03.214 INFO:teuthology.orchestra.run.vm00.stdout:(110/137): python3-jaraco-classes-3.2.1-5.el9.n 6.0 MB/s | 18 kB 00:00 2026-03-08T22:41:03.216 INFO:teuthology.orchestra.run.vm00.stdout:(111/137): python3-jaraco-collections-3.0.0-8.e 9.6 MB/s | 23 kB 00:00 2026-03-08T22:41:03.219 INFO:teuthology.orchestra.run.vm00.stdout:(112/137): python3-jaraco-context-6.0.1-3.el9.n 8.5 MB/s | 20 kB 00:00 2026-03-08T22:41:03.221 INFO:teuthology.orchestra.run.vm00.stdout:(113/137): python3-jaraco-functools-3.5.0-2.el9 9.1 MB/s | 19 kB 00:00 2026-03-08T22:41:03.224 INFO:teuthology.orchestra.run.vm00.stdout:(114/137): python3-jaraco-text-4.0.0-2.el9.noar 9.8 MB/s | 26 kB 00:00 2026-03-08T22:41:03.243 INFO:teuthology.orchestra.run.vm00.stdout:(115/137): python3-kubernetes-26.1.0-3.el9.noar 55 MB/s | 1.0 MB 00:00 2026-03-08T22:41:03.245 INFO:teuthology.orchestra.run.vm00.stdout:(116/137): python3-logutils-0.3.5-21.el9.noarch 18 MB/s | 46 kB 00:00 2026-03-08T22:41:03.249 INFO:teuthology.orchestra.run.vm00.stdout:(117/137): protobuf-compiler-3.14.0-17.el9.x86_ 1.8 MB/s | 862 kB 00:00 2026-03-08T22:41:03.251 INFO:teuthology.orchestra.run.vm00.stdout:(118/137): python3-more-itertools-8.12.0-2.el9. 
15 MB/s | 79 kB 00:00 2026-03-08T22:41:03.257 INFO:teuthology.orchestra.run.vm00.stdout:(119/137): python3-pecan-1.4.2-3.el9.noarch.rpm 49 MB/s | 272 kB 00:00 2026-03-08T22:41:03.257 INFO:teuthology.orchestra.run.vm00.stdout:(120/137): python3-natsort-7.1.1-5.el9.noarch.r 6.6 MB/s | 58 kB 00:00 2026-03-08T22:41:03.259 INFO:teuthology.orchestra.run.vm00.stdout:(121/137): python3-portend-3.1.0-2.el9.noarch.r 7.4 MB/s | 16 kB 00:00 2026-03-08T22:41:03.261 INFO:teuthology.orchestra.run.vm00.stdout:(122/137): python3-pyOpenSSL-21.0.0-1.el9.noarc 25 MB/s | 90 kB 00:00 2026-03-08T22:41:03.263 INFO:teuthology.orchestra.run.vm00.stdout:(123/137): python3-repoze-lru-0.7-16.el9.noarch 7.8 MB/s | 31 kB 00:00 2026-03-08T22:41:03.265 INFO:teuthology.orchestra.run.vm00.stdout:(124/137): python3-routes-2.5.1-5.el9.noarch.rp 45 MB/s | 188 kB 00:00 2026-03-08T22:41:03.268 INFO:teuthology.orchestra.run.vm00.stdout:(125/137): python3-rsa-4.9-2.el9.noarch.rpm 13 MB/s | 59 kB 00:00 2026-03-08T22:41:03.268 INFO:teuthology.orchestra.run.vm00.stdout:(126/137): python3-tempora-5.0.0-2.el9.noarch.r 13 MB/s | 36 kB 00:00 2026-03-08T22:41:03.271 INFO:teuthology.orchestra.run.vm00.stdout:(127/137): python3-typing-extensions-4.15.0-1.e 27 MB/s | 86 kB 00:00 2026-03-08T22:41:03.273 INFO:teuthology.orchestra.run.vm00.stdout:(128/137): python3-webob-1.8.8-2.el9.noarch.rpm 46 MB/s | 230 kB 00:00 2026-03-08T22:41:03.274 INFO:teuthology.orchestra.run.vm00.stdout:(129/137): python3-websocket-client-1.2.3-2.el9 29 MB/s | 90 kB 00:00 2026-03-08T22:41:03.279 INFO:teuthology.orchestra.run.vm00.stdout:(130/137): python3-xmltodict-0.12.0-15.el9.noar 4.9 MB/s | 22 kB 00:00 2026-03-08T22:41:03.281 INFO:teuthology.orchestra.run.vm00.stdout:(131/137): python3-werkzeug-2.0.3-3.el9.1.noarc 53 MB/s | 427 kB 00:00 2026-03-08T22:41:03.282 INFO:teuthology.orchestra.run.vm00.stdout:(132/137): python3-zc-lockfile-2.0-10.el9.noarc 5.3 MB/s | 20 kB 00:00 2026-03-08T22:41:03.286 
INFO:teuthology.orchestra.run.vm00.stdout:(133/137): re2-20211101-20.el9.x86_64.rpm 39 MB/s | 191 kB 00:00 2026-03-08T22:41:03.314 INFO:teuthology.orchestra.run.vm00.stdout:(134/137): thrift-0.15.0-4.el9.x86_64.rpm 51 MB/s | 1.6 MB 00:00 2026-03-08T22:41:03.646 INFO:teuthology.orchestra.run.vm00.stdout:(135/137): python3-scipy-1.9.3-2.el9.x86_64.rpm 17 MB/s | 19 MB 00:01 2026-03-08T22:41:04.281 INFO:teuthology.orchestra.run.vm00.stdout:(136/137): librbd1-19.2.3-678.ge911bdeb.el9.x86 3.3 MB/s | 3.2 MB 00:00 2026-03-08T22:41:04.330 INFO:teuthology.orchestra.run.vm00.stdout:(137/137): librados2-19.2.3-678.ge911bdeb.el9.x 3.3 MB/s | 3.4 MB 00:01 2026-03-08T22:41:04.333 INFO:teuthology.orchestra.run.vm00.stdout:-------------------------------------------------------------------------------- 2026-03-08T22:41:04.333 INFO:teuthology.orchestra.run.vm00.stdout:Total 18 MB/s | 210 MB 00:11 2026-03-08T22:41:05.007 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-08T22:41:05.063 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-08T22:41:05.063 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-08T22:41:05.961 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-08T22:41:05.961 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:41:07.064 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:41:07.080 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-more-itertools-8.12.0-2.el9.noarch 1/139
2026-03-08T22:41:07.095 INFO:teuthology.orchestra.run.vm00.stdout: Installing : thrift-0.15.0-4.el9.x86_64 2/139
2026-03-08T22:41:07.294 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lttng-ust-2.12.0-6.el9.x86_64 3/139
2026-03-08T22:41:07.296 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-08T22:41:07.367 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-08T22:41:07.406 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/139
2026-03-08T22:41:07.549 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 5/139
2026-03-08T22:41:07.560 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 6/139
2026-03-08T22:41:07.567 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librdkafka-1.6.1-102.el9.x86_64 7/139
2026-03-08T22:41:07.573 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librabbitmq-0.11.0-7.el9.x86_64 8/139
2026-03-08T22:41:07.596 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-8.2.1-3.el9.noarch 9/139
2026-03-08T22:41:07.612 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libnbd-1.20.3-4.el9.x86_64 10/139
2026-03-08T22:41:07.624 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-08T22:41:07.667 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-08T22:41:07.674 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/139
2026-03-08T22:41:07.691 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 12/139
2026-03-08T22:41:07.731 INFO:teuthology.orchestra.run.vm00.stdout: Installing : re2-1:20211101-20.el9.x86_64 13/139
2026-03-08T22:41:07.772 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-9.0.0-15.el9.x86_64 14/139
2026-03-08T22:41:07.801 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-werkzeug-2.0.3-3.el9.1.noarch 15/139
2026-03-08T22:41:07.835 INFO:teuthology.orchestra.run.vm00.stdout: Installing : liboath-2.6.12-1.el9.x86_64 16/139
2026-03-08T22:41:07.852 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-0.4.8-7.el9.noarch 17/139
2026-03-08T22:41:07.866 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-markupsafe-1.1.1-12.el9.x86_64 18/139
2026-03-08T22:41:07.880 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-3.14.0-17.el9.x86_64 19/139
2026-03-08T22:41:07.888 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-5.4.4-4.el9.x86_64 20/139
2026-03-08T22:41:07.895 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-3.0.4-9.el9.x86_64 21/139
2026-03-08T22:41:07.928 INFO:teuthology.orchestra.run.vm00.stdout: Installing : unzip-6.0-59.el9.x86_64 22/139
2026-03-08T22:41:07.952 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-urllib3-1.26.5-7.el9.noarch 23/139
2026-03-08T22:41:07.960 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-2.25.1-10.el9.noarch 24/139
2026-03-08T22:41:07.974 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libquadmath-11.5.0-14.el9.x86_64 25/139
2026-03-08T22:41:07.978 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libgfortran-11.5.0-14.el9.x86_64 26/139
2026-03-08T22:41:08.013 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ledmon-libs-1.1.0-3.el9.x86_64 27/139
2026-03-08T22:41:08.027 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 28/139
2026-03-08T22:41:08.043 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 29/139
2026-03-08T22:41:08.062 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 30/139
2026-03-08T22:41:08.072 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-requests-oauthlib-1.3.0-12.el9.noarch 31/139
2026-03-08T22:41:08.109 INFO:teuthology.orchestra.run.vm00.stdout: Installing : zip-3.0-35.el9.x86_64 32/139
2026-03-08T22:41:08.117 INFO:teuthology.orchestra.run.vm00.stdout: Installing : luarocks-3.9.2-5.el9.noarch 33/139
2026-03-08T22:41:08.133 INFO:teuthology.orchestra.run.vm00.stdout: Installing : lua-devel-5.4.4-4.el9.x86_64 34/139
2026-03-08T22:41:08.250 INFO:teuthology.orchestra.run.vm00.stdout: Installing : protobuf-compiler-3.14.0-17.el9.x86_64 35/139
2026-03-08T22:41:08.536 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-mako-1.1.4-6.el9.noarch 36/139
2026-03-08T22:41:08.597 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyasn1-modules-0.4.8-7.el9.noarch 37/139
2026-03-08T22:41:08.642 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rsa-4.9-2.el9.noarch 38/139
2026-03-08T22:41:08.687 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-classes-3.2.1-5.el9.noarch 39/139
2026-03-08T22:41:08.727 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 40/139
2026-03-08T22:41:08.761 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-zc-lockfile-2.0-10.el9.noarch 41/139
2026-03-08T22:41:08.787 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-xmltodict-0.12.0-15.el9.noarch 42/139
2026-03-08T22:41:08.817 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-websocket-client-1.2.3-2.el9.noarch 43/139
2026-03-08T22:41:08.825 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-webob-1.8.8-2.el9.noarch 44/139
2026-03-08T22:41:08.834 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-typing-extensions-4.15.0-1.el9.noarch 45/139
2026-03-08T22:41:08.854 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-repoze-lru-0.7-16.el9.noarch 46/139
2026-03-08T22:41:08.871 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-routes-2.5.1-5.el9.noarch 47/139
2026-03-08T22:41:08.888 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-natsort-7.1.1-5.el9.noarch 48/139
2026-03-08T22:41:08.953 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-logutils-0.3.5-21.el9.noarch 49/139
2026-03-08T22:41:08.968 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pecan-1.4.2-3.el9.noarch 50/139
2026-03-08T22:41:08.984 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-certifi-2023.05.07-4.el9.noarch 51/139
2026-03-08T22:41:09.039 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cachetools-4.2.4-1.el9.noarch 52/139
2026-03-08T22:41:09.459 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-google-auth-1:2.45.0-1.el9.noarch 53/139
2026-03-08T22:41:09.481 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-kubernetes-1:26.1.0-3.el9.noarch 54/139
2026-03-08T22:41:09.486 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-backports-tarfile-1.2.0-1.el9.noarch 55/139
2026-03-08T22:41:09.495 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-context-6.0.1-3.el9.noarch 56/139
2026-03-08T22:41:09.501 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-autocommand-2.2.2-8.el9.noarch 57/139
2026-03-08T22:41:09.508 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libunwind-1.6.2-1.el9.x86_64 58/139
2026-03-08T22:41:09.513 INFO:teuthology.orchestra.run.vm00.stdout: Installing : gperftools-libs-2.9.1-3.el9.x86_64 59/139
2026-03-08T22:41:09.516 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libarrow-doc-9.0.0-15.el9.noarch 60/139
2026-03-08T22:41:09.549 INFO:teuthology.orchestra.run.vm00.stdout: Installing : grpc-data-1.46.7-10.el9.noarch 61/139
2026-03-08T22:41:09.607 INFO:teuthology.orchestra.run.vm00.stdout: Installing : abseil-cpp-20211102.0-4.el9.x86_64 62/139
2026-03-08T22:41:09.625 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-1.46.7-10.el9.x86_64 63/139
2026-03-08T22:41:09.635 INFO:teuthology.orchestra.run.vm00.stdout: Installing : socat-1.7.4.1-8.el9.x86_64 64/139
2026-03-08T22:41:09.683 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-toml-0.10.2-6.el9.noarch 65/139
2026-03-08T22:41:09.690 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-functools-3.5.0-2.el9.noarch 66/139
2026-03-08T22:41:09.696 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-text-4.0.0-2.el9.noarch 67/139
2026-03-08T22:41:09.708 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jaraco-collections-3.0.0-8.el9.noarch 68/139
2026-03-08T22:41:09.715 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-tempora-5.0.0-2.el9.noarch 69/139
2026-03-08T22:41:09.752 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-portend-3.1.0-2.el9.noarch 70/139
2026-03-08T22:41:09.767 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-protobuf-3.14.0-17.el9.noarch 71/139
2026-03-08T22:41:09.816 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-grpcio-tools-1.46.7-10.el9.x86_64 72/139
2026-03-08T22:41:10.122 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-devel-3.9.25-3.el9.x86_64 73/139
2026-03-08T22:41:10.155 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-babel-2.9.1-2.el9.noarch 74/139
2026-03-08T22:41:10.161 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jinja2-2.11.3-8.el9.noarch 75/139
2026-03-08T22:41:10.227 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-0.3.29-1.el9.x86_64 76/139
2026-03-08T22:41:10.229 INFO:teuthology.orchestra.run.vm00.stdout: Installing : openblas-openmp-0.3.29-1.el9.x86_64 77/139
2026-03-08T22:41:10.254 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 78/139
2026-03-08T22:41:10.691 INFO:teuthology.orchestra.run.vm00.stdout: Installing : flexiblas-netlib-3.0.4-9.el9.x86_64 79/139
2026-03-08T22:41:10.802 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-1:1.23.5-2.el9.x86_64 80/139
2026-03-08T22:41:11.615 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 81/139
2026-03-08T22:41:11.690 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-scipy-1.9.3-2.el9.x86_64 82/139
2026-03-08T22:41:11.724 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libxslt-1.1.34-12.el9.x86_64 83/139
2026-03-08T22:41:11.728 INFO:teuthology.orchestra.run.vm00.stdout: Installing : xmlstarlet-1.6.1-20.el9.x86_64 84/139
2026-03-08T22:41:11.891 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libpmemobj-1.12.1-1.el9.x86_64 85/139
2026-03-08T22:41:11.894 INFO:teuthology.orchestra.run.vm00.stdout: Upgrading : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 86/139
2026-03-08T22:41:11.928 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 86/139
2026-03-08T22:41:11.932 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 87/139
2026-03-08T22:41:11.983 INFO:teuthology.orchestra.run.vm00.stdout: Installing : boost-program-options-1.75.0-13.el9.x86_64 88/139
2026-03-08T22:41:12.248 INFO:teuthology.orchestra.run.vm00.stdout: Installing : parquet-libs-9.0.0-15.el9.x86_64 89/139
2026-03-08T22:41:12.250 INFO:teuthology.orchestra.run.vm00.stdout: Installing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 90/139
2026-03-08T22:41:12.271 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 90/139
2026-03-08T22:41:12.273 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 91/139
2026-03-08T22:41:13.511 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-08T22:41:13.564 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-08T22:41:13.587 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 92/139
2026-03-08T22:41:13.602 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyparsing-2.4.7-9.el9.noarch 93/139
2026-03-08T22:41:13.611 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-packaging-20.9-5.el9.noarch 94/139
2026-03-08T22:41:13.633 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-ply-3.11-14.el9.noarch 95/139
2026-03-08T22:41:13.656 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pycparser-2.20-6.el9.noarch 96/139
2026-03-08T22:41:13.758 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cffi-1.14.5-5.el9.x86_64 97/139
2026-03-08T22:41:13.772 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cryptography-36.0.1-5.el9.x86_64 98/139
2026-03-08T22:41:13.805 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-pyOpenSSL-21.0.0-1.el9.noarch 99/139
2026-03-08T22:41:13.856 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cheroot-10.0.1-4.el9.noarch 100/139
2026-03-08T22:41:13.936 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-cherrypy-18.6.1-2.el9.noarch 101/139
2026-03-08T22:41:13.948 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-asyncssh-2.13.2-5.el9.noarch 102/139
2026-03-08T22:41:13.953 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-bcrypt-3.2.2-1.el9.x86_64 103/139
2026-03-08T22:41:13.960 INFO:teuthology.orchestra.run.vm00.stdout: Installing : pciutils-3.7.0-7.el9.x86_64 104/139
2026-03-08T22:41:13.965 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-25.08.0-2.el9.x86_64 105/139
2026-03-08T22:41:13.967 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatlib-service-25.08.0-2.el9.x86_64 106/139
2026-03-08T22:41:13.986 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 106/139
2026-03-08T22:41:14.312 INFO:teuthology.orchestra.run.vm00.stdout: Installing : qatzip-libs-1.3.1-1.el9.x86_64 107/139
2026-03-08T22:41:14.318 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 108/139
2026-03-08T22:41:14.360 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 108/139
2026-03-08T22:41:14.360 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /usr/lib/systemd/system/ceph.target.
2026-03-08T22:41:14.360 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /usr/lib/systemd/system/ceph-crash.service.
2026-03-08T22:41:14.360 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:14.365 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 109/139
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 109/139
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp
2026-03-08T22:41:21.751 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:21.891 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 110/139
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 110/139
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service".
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /usr/lib/systemd/system/ceph-mds.target.
2026-03-08T22:41:21.920 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:22.184 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 111/139
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 111/139
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service".
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /usr/lib/systemd/system/ceph-mon.target.
2026-03-08T22:41:22.208 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:22.216 INFO:teuthology.orchestra.run.vm00.stdout: Installing : mailcap-2.1.49-5.el9.noarch 112/139
2026-03-08T22:41:22.218 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libconfig-1.7.2-9.el9.x86_64 113/139
2026-03-08T22:41:22.238 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-08T22:41:22.238 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'qat' with GID 994.
2026-03-08T22:41:22.238 INFO:teuthology.orchestra.run.vm00.stdout:Creating group 'libstoragemgmt' with GID 993.
2026-03-08T22:41:22.238 INFO:teuthology.orchestra.run.vm00.stdout:Creating user 'libstoragemgmt' (daemon account for libstoragemgmt) with UID 993 and GID 993.
2026-03-08T22:41:22.238 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:22.250 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-08T22:41:22.281 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 114/139
2026-03-08T22:41:22.281 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/libstoragemgmt.service → /usr/lib/systemd/system/libstoragemgmt.service.
2026-03-08T22:41:22.281 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:22.329 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 115/139
2026-03-08T22:41:22.417 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cryptsetup-2.8.1-3.el9.x86_64 116/139
2026-03-08T22:41:22.421 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 117/139
2026-03-08T22:41:22.436 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 117/139
2026-03-08T22:41:22.437 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:22.437 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service".
2026-03-08T22:41:22.437 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:23.283 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 118/139
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 118/139
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service".
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /usr/lib/systemd/system/ceph-osd.target.
2026-03-08T22:41:23.314 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:23.392 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 119/139
2026-03-08T22:41:23.395 INFO:teuthology.orchestra.run.vm00.stdout: Installing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 119/139
2026-03-08T22:41:23.402 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 120/139
2026-03-08T22:41:23.429 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 121/139
2026-03-08T22:41:23.432 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 122/139
2026-03-08T22:41:24.052 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 122/139
2026-03-08T22:41:24.097 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 123/139
2026-03-08T22:41:24.665 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 123/139
2026-03-08T22:41:24.734 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 124/139
2026-03-08T22:41:24.804 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 124/139
2026-03-08T22:41:24.885 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 125/139
2026-03-08T22:41:24.888 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 126/139
2026-03-08T22:41:24.909 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 126/139
2026-03-08T22:41:24.909 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:24.909 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service".
2026-03-08T22:41:24.909 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-08T22:41:24.909 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /usr/lib/systemd/system/ceph-mgr.target.
2026-03-08T22:41:24.910 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:24.924 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 127/139
2026-03-08T22:41:24.934 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 127/139
2026-03-08T22:41:25.470 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 128/139
2026-03-08T22:41:25.474 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 129/139
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 129/139
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service".
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /usr/lib/systemd/system/ceph-radosgw.target.
2026-03-08T22:41:25.497 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:25.511 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 130/139
2026-03-08T22:41:25.534 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 130/139
2026-03-08T22:41:25.534 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:25.534 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service".
2026-03-08T22:41:25.534 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:25.726 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 131/139
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 131/139
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this.
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service".
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target → /usr/lib/systemd/system/ceph-rbd-mirror.target.
2026-03-08T22:41:25.748 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:41:28.578 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 132/139
2026-03-08T22:41:28.590 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 133/139
2026-03-08T22:41:28.594 INFO:teuthology.orchestra.run.vm00.stdout: Installing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 134/139
2026-03-08T22:41:28.655 INFO:teuthology.orchestra.run.vm00.stdout: Installing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 135/139
2026-03-08T22:41:28.667 INFO:teuthology.orchestra.run.vm00.stdout: Installing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 136/139
2026-03-08T22:41:28.671 INFO:teuthology.orchestra.run.vm00.stdout: Installing : python3-jmespath-1.0.1-1.el9.noarch 137/139
2026-03-08T22:41:28.671 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librbd1-2:16.2.4-5.el9.x86_64 138/139
2026-03-08T22:41:28.689 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:16.2.4-5.el9.x86_64 138/139
2026-03-08T22:41:28.689 INFO:teuthology.orchestra.run.vm00.stdout: Cleanup : librados2-2:16.2.4-5.el9.x86_64 139/139
2026-03-08T22:41:30.690 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:16.2.4-5.el9.x86_64 139/139
2026-03-08T22:41:30.690 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/139
2026-03-08T22:41:30.690 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/139
2026-03-08T22:41:30.690 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/139
2026-03-08T22:41:30.691 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 4/139
2026-03-08T22:41:30.691 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/139
2026-03-08T22:41:30.691 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 6/139
2026-03-08T22:41:30.691 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 7/139
2026-03-08T22:41:30.691 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 9/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 10/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 11/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 12/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_6 13/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 14/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 15/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 16/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 17/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 18/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9 19/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 20/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 21/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 22/139
2026-03-08T22:41:30.692 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 23/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 24/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 25/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 26/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 27/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 28/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 29/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 30/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 31/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 32/139
2026-03-08T22:41:30.693 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 33/139
2026-03-08T22:41:30.696 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 34/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 35/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 36/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 37/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 38/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 39/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 40/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 41/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 42/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 43/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 45/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.noarch 46/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 47/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 48/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 49/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 50/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 51/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 52/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 53/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 54/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 55/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 56/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 57/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 58/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 59/139
2026-03-08T22:41:30.703 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 60/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libstoragemgmt-1.10.1-1.el9.x86_64 61/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 62/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 63/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 64/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 65/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 66/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 67/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 68/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 69/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 70/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jmespath-1.0.1-1.el9.noarch 71/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 72/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 73/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 74/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 75/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 76/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 77/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 79/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 80/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 81/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 82/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 83/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 84/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 85/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 86/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 87/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 88/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 89/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 90/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 91/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 92/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 93/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 94/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 95/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 96/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 97/139
2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : luarocks-3.9.2-5.el9.noarch 98/139
2026-03-08T22:41:30.704
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 99/139 2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 100/139 2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 101/139 2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 102/139 2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 103/139 2026-03-08T22:41:30.704 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 104/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 105/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 106/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 107/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 108/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 109/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 110/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 111/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 112/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 113/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 114/139 
2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 115/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 116/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 117/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 118/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 119/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 120/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 121/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 122/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 123/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 124/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 125/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 126/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 127/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 128/139 2026-03-08T22:41:30.705 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 129/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 130/139 
2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 131/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-xmltodict-0.12.0-15.el9.noarch 132/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 133/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 134/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 135/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 136/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:16.2.4-5.el9.x86_64 137/139 2026-03-08T22:41:30.706 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 138/139 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:16.2.4-5.el9.x86_64 139/139 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout:Upgraded: 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout:Installed: 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 
INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.985 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 
INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.8.1-3.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-9.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libconfig-1.7.2-9.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-14.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: 
libquadmath-11.5.0-14.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: libxslt-1.1.34-12.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: mailcap-2.1.49-5.el9.noarch 2026-03-08T22:41:30.986 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9.x86_64 2026-03-08T22:41:30.987 
INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-4.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.6.1-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.25-3.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-jaraco-8.2.1-3.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-jmespath-1.0.1-1.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-logutils-0.3.5-21.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako-1.1.4-6.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan-1.4.2-3.el9.noarch 2026-03-08T22:41:30.987 
INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9.noarch 2026-03-08T22:41:30.987 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.noarch 2026-03-08T22:41:30.988 
INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-7.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob-1.8.8-2.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-xmltodict-0.12.0-15.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-25.08.0-2.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-25.08.0-2.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet-1.6.1-20.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-03-08T22:41:30.988 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:41:30.988 
INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:41:31.118 DEBUG:teuthology.parallel:result is None
2026-03-08T22:41:31.118 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=centos%2F9%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-08T22:41:31.772 DEBUG:teuthology.orchestra.run.vm00:> rpm -q ceph --qf '%{VERSION}-%{RELEASE}'
2026-03-08T22:41:31.795 INFO:teuthology.orchestra.run.vm00.stdout:19.2.3-678.ge911bdeb.el9
2026-03-08T22:41:31.795 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678.ge911bdeb.el9
2026-03-08T22:41:31.795 INFO:teuthology.task.install:The correct ceph version 19.2.3-678.ge911bdeb is installed.
2026-03-08T22:41:31.796 INFO:teuthology.task.install.util:Shipping valgrind.supp...
2026-03-08T22:41:31.796 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:41:31.796 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp
2026-03-08T22:41:31.865 INFO:teuthology.task.install.util:Shipping 'daemon-helper'...
2026-03-08T22:41:31.866 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:41:31.866 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/daemon-helper
2026-03-08T22:41:31.936 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/daemon-helper
2026-03-08T22:41:32.019 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'...
2026-03-08T22:41:32.019 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:41:32.019 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/adjust-ulimits
2026-03-08T22:41:32.092 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/adjust-ulimits
2026-03-08T22:41:32.173 INFO:teuthology.task.install.util:Shipping 'stdin-killer'...
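The version check above (`rpm -q ceph --qf '%{VERSION}-%{RELEASE}'`) is how the install task confirms the build on the node matches the scheduled sha1. The same check can be sketched as a standalone helper; `check_version` and its messages are illustrative, not teuthology code:

```shell
#!/bin/sh
# Illustrative helper (not a teuthology function): compare an installed
# RPM's VERSION-RELEASE against an expected string, using the same rpm
# query format the install task runs above.
check_version() {
    pkg=$1
    expected=$2
    installed=$(rpm -q "$pkg" --qf '%{VERSION}-%{RELEASE}') || return 2
    if [ "$installed" = "$expected" ]; then
        echo "OK: $pkg $installed"
    else
        echo "MISMATCH: $pkg installed=$installed expected=$expected" >&2
        return 1
    fi
}
```

A non-zero return distinguishes "not installed" (2) from "wrong build" (1), which is roughly the distinction the task log draws between a failed install and a version mismatch.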
2026-03-08T22:41:32.173 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:41:32.173 DEBUG:teuthology.orchestra.run.vm00:> sudo dd of=/usr/bin/stdin-killer
2026-03-08T22:41:32.256 DEBUG:teuthology.orchestra.run.vm00:> sudo chmod a=rx -- /usr/bin/stdin-killer
2026-03-08T22:41:32.330 INFO:teuthology.run_tasks:Running task workunit...
2026-03-08T22:41:32.334 INFO:tasks.workunit:Pulling workunits from ref 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-08T22:41:32.334 INFO:tasks.workunit:Making a separate scratch dir for every client...
2026-03-08T22:41:32.334 INFO:tasks.workunit:timeout=3h
2026-03-08T22:41:32.334 INFO:tasks.workunit:cleanup=True
2026-03-08T22:41:32.334 DEBUG:teuthology.orchestra.run.vm00:> stat -- /home/ubuntu/cephtest/mnt.0
2026-03-08T22:41:32.391 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:41:32.391 INFO:teuthology.orchestra.run.vm00.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory
2026-03-08T22:41:32.391 DEBUG:teuthology.orchestra.run.vm00:> mkdir -- /home/ubuntu/cephtest/mnt.0
2026-03-08T22:41:32.453 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0
2026-03-08T22:41:32.453 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0
2026-03-08T22:41:32.514 DEBUG:teuthology.orchestra.run.vm00:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout 569c3e99c9b32a51b4eaf08731c728f4513ed589
2026-03-08T22:41:32.581 INFO:tasks.workunit.client.0.vm00.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:Note: switching to '569c3e99c9b32a51b4eaf08731c728f4513ed589'.
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:state without impacting any branches by switching back to a branch.
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr: git switch -c
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:Or undo this operation with:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr: git switch -
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:
2026-03-08T22:42:23.473 INFO:tasks.workunit.client.0.vm00.stderr:HEAD is now at 569c3e99c9b qa/rgw: bucket notifications use pynose
2026-03-08T22:42:23.480 DEBUG:teuthology.orchestra.run.vm00:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/standalone && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-08T22:42:23.537 DEBUG:teuthology.orchestra.run.vm00:> set -ex
2026-03-08T22:42:23.537 DEBUG:teuthology.orchestra.run.vm00:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-08T22:42:23.595
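The clone step above checks out the suite commit sha directly, which is why git prints the detached-HEAD notice: the workunit tree is pinned to `CEPH_REF` rather than a branch. The same pattern can be reproduced against a throwaway local repository (the temp-repo setup here is illustrative, not part of teuthology):

```shell
#!/bin/bash
# Illustrative sketch of the workunit checkout pattern: clone a repository,
# then check out an exact commit sha, leaving the clone in detached HEAD
# exactly as the log above shows. Uses a throwaway local repo, not ceph.git.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/src"
git -C "$tmp/src" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m 'initial commit'
sha=$(git -C "$tmp/src" rev-parse HEAD)   # the commit to pin, like CEPH_REF
git clone -q "$tmp/src" "$tmp/clone"
# advice.detachedHead=false suppresses the notice seen in the log.
git -C "$tmp/clone" -c advice.detachedHead=false checkout -q "$sha"
head=$(git -C "$tmp/clone" rev-parse HEAD)
echo "pinned at $head"
rm -rf "$tmp"
```

Pinning by sha rather than branch name means a rerun of the job always tests the same tree, even if the suite branch moves afterwards.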
INFO:tasks.workunit:Running workunits matching mon-stretch on client.0...
2026-03-08T22:42:23.596 INFO:tasks.workunit:Running workunit mon-stretch/mon-stretch-fail-recovery.sh...
2026-03-08T22:42:23.596 DEBUG:teuthology.orchestra.run.vm00:workunit test mon-stretch/mon-stretch-fail-recovery.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh
2026-03-08T22:42:23.662 INFO:tasks.workunit.client.0.vm00.stderr:stty: 'standard input': Inappropriate ioctl for device
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:+ PS4='${BASH_SOURCE[0]}:$LINENO: ${FUNCNAME[0]}: '
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2370: main: export PATH=.:/home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2370: main: PATH=.:/home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2371: main: export PYTHONWARNINGS=ignore
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2371: main: PYTHONWARNINGS=ignore
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2372: main: export CEPH_CONF=/dev/null
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2372: main: CEPH_CONF=/dev/null
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2373: main: unset CEPH_ARGS
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2375: main: local code
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2376: main: run td/mon-stretch-fail-recovery
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:5: run: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:6: run: shift
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:8: run: export CEPH_MON_A=127.0.0.1:7139
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:8: run: CEPH_MON_A=127.0.0.1:7139
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:9: run: export CEPH_MON_B=127.0.0.1:7141
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:9: run: CEPH_MON_B=127.0.0.1:7141
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:10: run: export CEPH_MON_C=127.0.0.1:7142
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:10: run: CEPH_MON_C=127.0.0.1:7142
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:11: run: export CEPH_MON_D=127.0.0.1:7143
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:11: run: CEPH_MON_D=127.0.0.1:7143
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:12: run: export CEPH_MON_E=127.0.0.1:7144
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:12: run: CEPH_MON_E=127.0.0.1:7144
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:13: run: export CEPH_ARGS
2026-03-08T22:42:23.666 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:14: run: uuidgen
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:14: run: CEPH_ARGS+='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none '
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:16: run: export 'BASE_CEPH_ARGS=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none '
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:16: run: BASE_CEPH_ARGS='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none '
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:17: run: CEPH_ARGS+=--mon-host=127.0.0.1:7139
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:19: run: set
2026-03-08T22:42:23.667 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:19: run: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p'
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:19: run: local funcs=TEST_stretched_cluster_failover_add_three_osds
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:20: run: for func in $funcs
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:21: run: setup td/mon-stretch-fail-recovery
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:131: setup: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:132: setup: teardown td/mon-stretch-fail-recovery
2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local
dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretch-fail-recovery KILL 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:42:23.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:42:23.671 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:42:23.671 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:42:23.672 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:42:23.672 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 
2026-03-08T22:42:23.673 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:42:23.673 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:42:23.673 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:42:23.674 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:42:23.674 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:42:23.675 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:42:23.675 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:42:23.676 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:42:23.677 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:42:23.677 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretch-fail-recovery 2026-03-08T22:42:23.678 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:42:23.678 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.678 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.678 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.51725 2026-03-08T22:42:23.679 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:42:23.679 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:42:23.679 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:133: setup: mkdir -p td/mon-stretch-fail-recovery 2026-03-08T22:42:23.680 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: get_asok_dir 2026-03-08T22:42:23.680 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.680 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.680 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: mkdir -p /tmp/ceph-asok.51725 2026-03-08T22:42:23.681 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: ulimit -n 2026-03-08T22:42:23.681 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: '[' 1024 -le 1024 ']' 
2026-03-08T22:42:23.681 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:136: setup: ulimit -n 4096 2026-03-08T22:42:23.681 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:138: setup: '[' -z '' ']' 2026-03-08T22:42:23.681 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:139: setup: trap 'teardown td/mon-stretch-fail-recovery 1' TERM HUP INT 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:22: run: TEST_stretched_cluster_failover_add_three_osds td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:27: TEST_stretched_cluster_failover_add_three_osds: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:28: TEST_stretched_cluster_failover_add_three_osds: local OSDS=8 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:29: TEST_stretched_cluster_failover_add_three_osds: setup td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:131: setup: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:132: setup: teardown td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local 
dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretch-fail-recovery KILL 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:42:23.682 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:42:23.683 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:42:23.683 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:42:23.684 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:42:23.684 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 
2026-03-08T22:42:23.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:42:23.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:42:23.685 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:42:23.686 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:42:23.686 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:42:23.686 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:42:23.686 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:42:23.687 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:42:23.688 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:42:23.688 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretch-fail-recovery 2026-03-08T22:42:23.688 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:42:23.689 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.689 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.51725 2026-03-08T22:42:23.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:42:23.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:42:23.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:133: setup: mkdir -p td/mon-stretch-fail-recovery 2026-03-08T22:42:23.690 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: get_asok_dir 2026-03-08T22:42:23.690 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.690 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: mkdir -p /tmp/ceph-asok.51725 2026-03-08T22:42:23.691 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: ulimit -n 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: '[' 4096 -le 1024 ']' 
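The `get_asok_dir` helper traced above returns `$CEPH_ASOK_DIR` when that variable is set (`[ -n "$CEPH_ASOK_DIR" ]`) and otherwise falls back to a per-process `/tmp/ceph-asok.<pid>` directory; `51725` here is the test shell's PID. A minimal Python sketch of that selection logic (the function name mirrors the shell helper; the `env`/`pid` parameters are added for testability and are not part of the original):

```python
import os

def get_asok_dir(env=None, pid=None):
    """Pick the admin-socket directory the way ceph-helpers.sh does:
    honor CEPH_ASOK_DIR when set, else use a per-process /tmp path."""
    env = os.environ if env is None else env
    pid = os.getpid() if pid is None else pid
    configured = env.get("CEPH_ASOK_DIR", "")
    if configured:  # corresponds to: '[' -n "$CEPH_ASOK_DIR" ']'
        return configured
    return f"/tmp/ceph-asok.{pid}"
```

With `pid=51725` and no `CEPH_ASOK_DIR` in the environment this yields `/tmp/ceph-asok.51725`, matching the `echo` in the trace.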
2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:138: setup: '[' -z '' ']' 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:139: setup: trap 'teardown td/mon-stretch-fail-recovery 1' TERM HUP INT 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:31: TEST_stretched_cluster_failover_add_three_osds: run_mon td/mon-stretch-fail-recovery a --public-addr 127.0.0.1:7139 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=a 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretch-fail-recovery/a 2026-03-08T22:42:23.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id a --mkfs --mon-data=td/mon-stretch-fail-recovery/a --run-dir=td/mon-stretch-fail-recovery --public-addr 127.0.0.1:7139 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:42:23.752 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:23.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id a --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretch-fail-recovery/a '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretch-fail-recovery/log --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7139 2026-03-08T22:42:23.792 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon a fsid 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=a 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.a 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.a 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.a ']' 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.793 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 
2026-03-08T22:42:23.794 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.a.asok 2026-03-08T22:42:23.794 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:23.794 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.a.asok config get fsid 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon a mon_host 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=a 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.a 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.a 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.a ']' 2026-03-08T22:42:23.856 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.a.asok 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:23.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.a.asok config get mon_host 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:32: TEST_stretched_cluster_failover_add_three_osds: wait_for_quorum 300 1 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=1 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 1 ]] 
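The `get_config` helper above queries the daemon over its admin socket with `ceph --format json daemon <asok> config get <key>` (with `CEPH_ARGS` cleared so the CLI talks to the socket only) and extracts the value with `jq -r .<key>`. The jq step can be sketched in Python, assuming the reply has the `{key: value}` shape that the `.fsid`/`.mon_host` filters imply; the sample payload below is illustrative, with the fsid taken from this run:

```python
import json

def extract_config_value(json_text, key):
    """Equivalent of `jq -r .<key>` on the admin-socket config reply."""
    return json.loads(json_text)[key]

# Illustrative reply shape; the fsid is the one generated by this run.
reply = '{"fsid": "e6beb2c8-8f22-428a-b327-33ee467015ad"}'
```

`extract_config_value(reply, "fsid")` then returns the bare UUID, just as `jq -r` strips the JSON quoting.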
2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:42:23.909 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:42:23.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010043 2026-03-08T22:42:23.911 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:42:23.912 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009743 -lt 1773010043 ]] 2026-03-08T22:42:23.912 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 1' 2026-03-08T22:42:23.912 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:42:24.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:42:24.038 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":3,"quorum":[0],"quorum_names":["a"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":1,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:23.707250Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:24.038 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":3,"quorum":[0],"quorum_names":["a"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":1,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:23.707250Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:24.038 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 1' 2026-03-08T22:42:24.040 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:42:24.040 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:42:24.040 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:42:24.040 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:42:24.041 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:34: TEST_stretched_cluster_failover_add_three_osds: run_mon td/mon-stretch-fail-recovery b --public-addr 127.0.0.1:7141 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=b 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretch-fail-recovery/b 2026-03-08T22:42:24.041 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id b --mkfs --mon-data=td/mon-stretch-fail-recovery/b --run-dir=td/mon-stretch-fail-recovery --public-addr 127.0.0.1:7141 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: 
get_asok_path: get_asok_dir 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:24.081 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:24.082 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:24.082 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id b --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretch-fail-recovery/b '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretch-fail-recovery/log --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7141 2026-03-08T22:42:24.132 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:42:24.132 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon b fsid 2026-03-08T22:42:24.132 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:24.132 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=b 2026-03-08T22:42:24.132 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:42:24.132 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:42:24.133 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.b 2026-03-08T22:42:24.133 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.b 2026-03-08T22:42:24.133 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.b ']' 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.b.asok 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:24.135 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: 
get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.b.asok config get fsid 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon b mon_host 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=b 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.b 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.b 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.b ']' 2026-03-08T22:42:24.191 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:24.192 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:24.192 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:24.192 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.b.asok 2026-03-08T22:42:24.192 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:24.192 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.b.asok config get mon_host 2026-03-08T22:42:24.253 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:35: TEST_stretched_cluster_failover_add_three_osds: CEPH_ARGS='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141' 2026-03-08T22:42:24.253 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:36: TEST_stretched_cluster_failover_add_three_osds: wait_for_quorum 300 2 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=2 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 2 ]] 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:42:24.254 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:42:24.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010044 2026-03-08T22:42:24.258 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:42:24.258 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009744 -lt 1773010044 ]] 2026-03-08T22:42:24.258 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 2' 2026-03-08T22:42:24.259 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:42:30.373 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:42:30.373 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":8,"quorum":[0,1],"quorum_names":["a","b"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":2,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:24.133992Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:30.373 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":8,"quorum":[0,1],"quorum_names":["a","b"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":2,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:24.133992Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:30.373 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 2' 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: 
wait_for_quorum: break 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:38: TEST_stretched_cluster_failover_add_three_osds: run_mon td/mon-stretch-fail-recovery c --public-addr 127.0.0.1:7142 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=c 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretch-fail-recovery/c 2026-03-08T22:42:30.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id c --mkfs --mon-data=td/mon-stretch-fail-recovery/c --run-dir=td/mon-stretch-fail-recovery --public-addr 127.0.0.1:7142 2026-03-08T22:42:30.402 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:42:30.402 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:30.403 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:30.403 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:30.403 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:30.403 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:30.403 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:30.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id c --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretch-fail-recovery/c '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretch-fail-recovery/log --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7142 2026-03-08T22:42:30.438 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:42:30.439 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon c fsid 2026-03-08T22:42:30.439 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:30.439 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=c 2026-03-08T22:42:30.439 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:42:30.439 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:42:30.440 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.c 2026-03-08T22:42:30.440 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.c 2026-03-08T22:42:30.440 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.c ']' 2026-03-08T22:42:30.441 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:30.441 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:30.441 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:30.441 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo 
/tmp/ceph-asok.51725/ceph-mon.c.asok 2026-03-08T22:42:30.443 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:30.443 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.c.asok config get fsid 2026-03-08T22:42:30.499 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon c mon_host 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=c 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.c 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.c 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.c ']' 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:30.500 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:30.500 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.c.asok 2026-03-08T22:42:30.501 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:30.501 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.c.asok config get mon_host 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:39: TEST_stretched_cluster_failover_add_three_osds: CEPH_ARGS='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142' 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:40: TEST_stretched_cluster_failover_add_three_osds: wait_for_quorum 300 3 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=3 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 
300 ]] 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 3 ]] 2026-03-08T22:42:30.558 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:42:30.559 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:42:30.559 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010050 2026-03-08T22:42:30.560 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:42:30.560 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009750 -lt 1773010050 ]] 2026-03-08T22:42:30.560 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 3' 2026-03-08T22:42:30.561 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:42:39.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:42:39.685 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":12,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":2,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":3,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:30.442223Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:39.686 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 3' 2026-03-08T22:42:39.686 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":12,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":2,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":3,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:30.442223Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:39.688 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:42:39.690 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:42:39.690 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:42:39.690 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:42: TEST_stretched_cluster_failover_add_three_osds: run_mon td/mon-stretch-fail-recovery d --public-addr 127.0.0.1:7143 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=d 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretch-fail-recovery/d 2026-03-08T22:42:39.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id d --mkfs --mon-data=td/mon-stretch-fail-recovery/d --run-dir=td/mon-stretch-fail-recovery --public-addr 127.0.0.1:7143 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: 
get_asok_path: local name= 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:39.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id d --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretch-fail-recovery/d '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretch-fail-recovery/log --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7143 2026-03-08T22:42:39.811 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:42:39.811 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon d fsid 2026-03-08T22:42:39.811 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:39.811 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=d 2026-03-08T22:42:39.811 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:42:39.811 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:42:39.812 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.d 2026-03-08T22:42:39.812 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.d 2026-03-08T22:42:39.812 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.d ']' 2026-03-08T22:42:39.812 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:39.813 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:39.813 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:39.813 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo 
/tmp/ceph-asok.51725/ceph-mon.d.asok 2026-03-08T22:42:39.813 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:39.814 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.d.asok config get fsid 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon d mon_host 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=d 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.d 2026-03-08T22:42:39.874 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.d 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.d ']' 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:39.875 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.d.asok 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:39.875 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.d.asok config get mon_host 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:43: TEST_stretched_cluster_failover_add_three_osds: CEPH_ARGS='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143' 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:44: TEST_stretched_cluster_failover_add_three_osds: wait_for_quorum 300 4 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=4 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: 
wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 4 ]] 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:42:39.927 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:42:39.928 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010059 2026-03-08T22:42:39.928 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:42:39.930 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009759 -lt 1773010059 ]] 2026-03-08T22:42:39.930 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 4' 2026-03-08T22:42:39.930 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:42:49.061 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:42:49.061 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":16,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":4,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:39.838105Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:49.061 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 4' 2026-03-08T22:42:49.061 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":16,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":4,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:39.838105Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:42:49.064 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:46: TEST_stretched_cluster_failover_add_three_osds: run_mon td/mon-stretch-fail-recovery e --public-addr 127.0.0.1:7144 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=e 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretch-fail-recovery/e 2026-03-08T22:42:49.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id e --mkfs --mon-data=td/mon-stretch-fail-recovery/e --run-dir=td/mon-stretch-fail-recovery --public-addr 127.0.0.1:7144 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: 
run_mon: get_asok_path 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:49.096 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:49.097 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:49.097 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id e --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretch-fail-recovery/e '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretch-fail-recovery/log --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7144 2026-03-08T22:42:49.133 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:42:49.134 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon e fsid 2026-03-08T22:42:49.134 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:49.134 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=e 2026-03-08T22:42:49.134 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.e 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.e 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.e ']' 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 
2026-03-08T22:42:49.135 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.e.asok 2026-03-08T22:42:49.136 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:49.136 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.e.asok config get fsid 2026-03-08T22:42:49.208 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon e mon_host 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=e 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.e 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.e 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.e ']' 2026-03-08T22:42:49.209 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:49.209 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.51725/ceph-mon.e.asok 2026-03-08T22:42:49.210 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:42:49.210 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.51725/ceph-mon.e.asok config get mon_host 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:47: TEST_stretched_cluster_failover_add_three_osds: CEPH_ARGS='--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:48: TEST_stretched_cluster_failover_add_three_osds: wait_for_quorum 300 5 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: 
wait_for_quorum: local quorumsize=5 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 5 ]] 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:42:49.267 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:42:49.268 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010069 2026-03-08T22:42:49.268 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:42:49.269 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009769 -lt 1773010069 ]] 2026-03-08T22:42:49.269 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 5' 2026-03-08T22:42:49.270 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:42:55.401 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:42:55.401 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":20,"quorum":[0,1,2,3,4],"quorum_names":["a","b","c","d","e"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:49.142794Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:55.401 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":20,"quorum":[0,1,2,3,4],"quorum_names":["a","b","c","d","e"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"e6beb2c8-8f22-428a-b327-33ee467015ad","modified":"2026-03-08T22:42:49.142794Z","created":"2026-03-08T22:42:23.707250Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:42:55.401 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 5' 
2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:42:55.403 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:50: TEST_stretched_cluster_failover_add_three_osds: ceph mon set election_strategy connectivity 2026-03-08T22:42:55.568 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:51: TEST_stretched_cluster_failover_add_three_osds: ceph mon add disallowed_leader e 2026-03-08T22:42:55.725 INFO:tasks.workunit.client.0.vm00.stderr:mon.e is already disallowed 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:53: TEST_stretched_cluster_failover_add_three_osds: run_mgr td/mon-stretch-fail-recovery x 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: 
run_mgr: shift 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=x 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretch-fail-recovery/x 2026-03-08T22:42:55.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph config set mgr mgr_pool false --force 2026-03-08T22:42:55.879 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:42:55.879 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:55.879 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:55.880 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:55.880 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:55.880 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:55.880 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:55.881 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:55.882 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id x --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretch-fail-recovery/x '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:54: TEST_stretched_cluster_failover_add_three_osds: run_mgr td/mon-stretch-fail-recovery y 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: run_mgr: shift 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=y 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretch-fail-recovery/y 2026-03-08T22:42:55.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph 
config set mgr mgr_pool false --force 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:56.043 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:56.044 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id y --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretch-fail-recovery/y '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' 
--mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:55: TEST_stretched_cluster_failover_add_three_osds: run_mgr td/mon-stretch-fail-recovery z 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: run_mgr: shift 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=z 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretch-fail-recovery/z 2026-03-08T22:42:56.067 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph config set mgr mgr_pool false --force 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: 
get_asok_dir 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:59.255 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:59.256 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id z --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretch-fail-recovery/z '--log-file=td/mon-stretch-fail-recovery/$name.log' '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --run-dir=td/mon-stretch-fail-recovery '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:42:59.280 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: expr 8 - 1 2026-03-08T22:42:59.281 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: seq 0 7 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: 
for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 0 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=0 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/0 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' 
--osd-scrub-load-threshold=2000' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/0' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/0/journal' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:42:59.283 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 
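The repeated `get_asok_dir`/`get_asok_path` calls traced above resolve where each daemon's admin socket goes. A minimal sketch of those two helpers, reconstructed from the xtrace (the `CEPH_ASOK_DIR` override variable name is an assumption; the authoritative versions live in `qa/standalone/ceph-helpers.sh`):

```shell
# Sketch of the asok helpers as they behave in the trace above.
# Assumption: the CEPH_ASOK_DIR override name is inferred, not confirmed here.
get_asok_dir() {
    if [ -n "$CEPH_ASOK_DIR" ]; then
        echo "$CEPH_ASOK_DIR"
    else
        # $$ yields the per-run directory seen in the log (/tmp/ceph-asok.51725)
        echo "/tmp/ceph-asok.$$"
    fi
}

get_asok_path() {
    local name=$1
    if [ -n "$name" ]; then
        echo "$(get_asok_dir)/ceph-$name.asok"
    else
        # With no name, a literal $cluster-$name template is emitted so the
        # daemon expands it itself, exactly as quoted in the trace.
        echo "$(get_asok_dir)/\$cluster-\$name.asok"
    fi
}
```

With no argument the helper deliberately returns the unexpanded `$cluster-$name.asok` template, which is why it appears single-quoted on the `ceph-mgr` and `ceph-osd` command lines in this log.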
2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:42:59.284 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:42:59.284 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/0 2026-03-08T22:42:59.285 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:42:59.285 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=be917770-fb46-494d-a75e-030e3b6ea428 2026-03-08T22:42:59.286 INFO:tasks.workunit.client.0.vm00.stdout:add osd0 be917770-fb46-494d-a75e-030e3b6ea428 2026-03-08T22:42:59.286 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd0 be917770-fb46-494d-a75e-030e3b6ea428' 2026-03-08T22:42:59.286 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:42:59.299 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQBz+61p2+fAERAAjT6hjOeenEHNiWgg8AZ/gw== 2026-03-08T22:42:59.299 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQBz+61p2+fAERAAjT6hjOeenEHNiWgg8AZ/gw=="}' 2026-03-08T22:42:59.299 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new be917770-fb46-494d-a75e-030e3b6ea428 -i td/mon-stretch-fail-recovery/0/new.json 2026-03-08T22:42:59.554 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:42:59.563 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/0/new.json 2026-03-08T22:42:59.564 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 0 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/0 --osd-journal=td/mon-stretch-fail-recovery/0/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQBz+61p2+fAERAAjT6hjOeenEHNiWgg8AZ/gw== --osd-uuid be917770-fb46-494d-a75e-030e3b6ea428 2026-03-08T22:42:59.585 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:42:59.582+0000 7f577fe21780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:42:59.589 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:42:59.588+0000 7f577fe21780 -1 WARNING: all dangerous and experimental features are enabled. 
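The osd.0 bring-up above follows a fixed recipe: generate a uuid, mint a cephx secret, register it with `ceph osd new` via a temporary `new.json`, then mkfs the data directory. A condensed sketch of that recipe; `provision_osd` and `make_osd_secret_json` are illustrative names (not from `ceph-helpers.sh`), and only the JSON construction runs without a live cluster:

```shell
# Condensed provisioning recipe for one OSD, per the run_osd trace above.
# Assumes a reachable cluster; provision_osd is illustrative, not the real helper.
make_osd_secret_json() {
    # Matches the payload echoed into new.json in the log
    printf '{"cephx_secret": "%s"}' "$1"
}

provision_osd() {
    local data=$1 id=$2
    local uuid secret
    uuid=$(uuidgen)
    secret=$(ceph-authtool --gen-print-key)
    make_osd_secret_json "$secret" > "$data/new.json"
    ceph osd new "$uuid" -i "$data/new.json"   # registers the uuid, prints the id
    rm "$data/new.json"
    ceph-osd -i "$id" --osd-data="$data" --mkfs --key "$secret" --osd-uuid "$uuid"
}
```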
2026-03-08T22:42:59.592 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:42:59.590+0000 7f577fe21780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:42:59.592 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:42:59.590+0000 7f577fe21780 -1 bdev(0x55769e681c00 td/mon-stretch-fail-recovery/0/block) open stat got: (1) Operation not permitted 2026-03-08T22:42:59.592 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:42:59.591+0000 7f577fe21780 -1 bluestore(td/mon-stretch-fail-recovery/0) _read_fsid unparsable uuid 2026-03-08T22:43:01.795 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/0/keyring 2026-03-08T22:43:01.795 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:01.796 INFO:tasks.workunit.client.0.vm00.stdout:adding osd0 key to auth repository 2026-03-08T22:43:01.796 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd0 key to auth repository 2026-03-08T22:43:01.796 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/0/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:02.118 INFO:tasks.workunit.client.0.vm00.stdout:start osd.0 2026-03-08T22:43:02.118 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.0 2026-03-08T22:43:02.118 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 0 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/0 --osd-journal=td/mon-stretch-fail-recovery/0/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:02.118 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:02.120 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:02.122 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:02.138 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:02.135+0000 7fac86414780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:02.141 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:02.140+0000 7fac86414780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:02.145 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:02.142+0000 7fac86414780 -1 WARNING: all dangerous and experimental features are enabled. 
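Before waiting for the OSD, run_osd checks whether the `noup` flag is set: the log shows `ceph osd dump --format=json` piped through `jq '.flags_set[]'` and `grep -q '"noup"'`. A simplified standalone version of that predicate, taking the dump as a string so it can be exercised without a cluster (the real helper additionally extracts `.flags_set[]` with jq from live output):

```shell
# Simplified noup-flag predicate; the traced helper first narrows the JSON
# with jq '.flags_set[]' before grepping.
osd_noup_set() {
    # $1: output of `ceph osd dump --format=json`
    echo "$1" | grep -q '"noup"'
}
```

It returns 0 (flag set) or 1, which run_osd uses to decide whether `wait_for_osd up` would ever succeed.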
2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 0 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=0 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:02.371 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:43:02.626 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:02.705 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:02.703+0000 7fac86414780 -1 Falling back to public interface 2026-03-08T22:43:03.627 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 
2026-03-08T22:43:03.627 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:03.627 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:03.627 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:03.628 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:03.628 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:43:03.811 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:03.810+0000 7fac86414780 -1 osd.0 0 log_to_monitors true 2026-03-08T22:43:03.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:04.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:43:05.071 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:05.069+0000 
7fac81d5c640 -1 osd.0 0 waiting for initial osdmap 2026-03-08T22:43:05.130 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:06.133 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:43:06.384 INFO:tasks.workunit.client.0.vm00.stdout:osd.0 up in weight 1 up_from 5 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6802/1943181371,v1:127.0.0.1:6803/1943181371] [v2:127.0.0.1:6804/1943181371,v1:127.0.0.1:6805/1943181371] exists,up be917770-fb46-494d-a75e-030e3b6ea428 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:06.385 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 1 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=1 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/1 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 
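osd.0 was confirmed up after four iterations of `wait_for_osd` (the `return 0` above). The loop traced at ceph-helpers.sh:978-991 amounts to: poll `ceph osd dump` up to 300 times, one second apart, until `osd.<id> <state>` appears. A sketch with the dump command made injectable so it can run without a cluster; that third parameter is a testability addition, not part of the original helper:

```shell
# Sketch of wait_for_osd from the trace: poll for "osd.<id> <state>" in the
# osd dump, up to 300 one-second attempts.
# $3 (dump command) is a testability hook, not in the original helper.
wait_for_osd() {
    local state=$1 id=$2 dump_cmd=${3:-"ceph osd dump"}
    local status=1 i
    for ((i = 0; i < 300; i++)); do
        echo $i                              # iteration counter, as in the log
        if $dump_cmd | grep "osd.$id $state"; then
            status=0
            break
        fi
        sleep 1
    done
    return $status
}
```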
2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/1' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/1/journal' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:43:06.385 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:43:06.386 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops'
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' '
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+=
2026-03-08T22:43:06.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/1
2026-03-08T22:43:06.387 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen
2026-03-08T22:43:06.388 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9
2026-03-08T22:43:06.388 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd1 c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9'
2026-03-08T22:43:06.388 INFO:tasks.workunit.client.0.vm00.stdout:add osd1 c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9
2026-03-08T22:43:06.388 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key
2026-03-08T22:43:06.402 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQB6+61pglHkFxAABXigZ1nyFxM8haUkBj2UpA==
2026-03-08T22:43:06.402 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQB6+61pglHkFxAABXigZ1nyFxM8haUkBj2UpA=="}'
2026-03-08T22:43:06.402 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9 -i td/mon-stretch-fail-recovery/1/new.json
2026-03-08T22:43:06.668 INFO:tasks.workunit.client.0.vm00.stdout:1
2026-03-08T22:43:06.679 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/1/new.json
2026-03-08T22:43:06.680 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 1 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/1 --osd-journal=td/mon-stretch-fail-recovery/1/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQB6+61pglHkFxAABXigZ1nyFxM8haUkBj2UpA== --osd-uuid c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9
2026-03-08T22:43:06.699 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:06.698+0000 7f8293012780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:06.701 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:06.700+0000 7f8293012780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:06.703 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:06.701+0000 7f8293012780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:06.703 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:06.702+0000 7f8293012780 -1 bdev(0x56363c449c00 td/mon-stretch-fail-recovery/1/block) open stat got: (1) Operation not permitted
2026-03-08T22:43:06.703 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:06.702+0000 7f8293012780 -1 bluestore(td/mon-stretch-fail-recovery/1) _read_fsid unparsable uuid
2026-03-08T22:43:08.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/1/keyring
2026-03-08T22:43:08.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat
2026-03-08T22:43:08.887 INFO:tasks.workunit.client.0.vm00.stdout:adding osd1 key to auth repository
2026-03-08T22:43:08.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd1 key to auth repository
2026-03-08T22:43:08.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/1/keyring auth add osd.1 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd'
2026-03-08T22:43:09.219 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.1
2026-03-08T22:43:09.219 INFO:tasks.workunit.client.0.vm00.stdout:start osd.1
2026-03-08T22:43:09.220 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 1 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/1 --osd-journal=td/mon-stretch-fail-recovery/1/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops
2026-03-08T22:43:09.220 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]'
2026-03-08T22:43:09.222 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json
2026-03-08T22:43:09.224 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"'
2026-03-08T22:43:09.243 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:09.240+0000 7f0bdda3f780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:09.251 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:09.250+0000 7f0bdda3f780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:09.255 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:09.252+0000 7f0bdda3f780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 1
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=1
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 ))
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:09.481 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0
2026-03-08T22:43:09.482 INFO:tasks.workunit.client.0.vm00.stdout:0
2026-03-08T22:43:09.482 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:09.482 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up'
2026-03-08T22:43:09.719 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:43:10.331 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:10.330+0000 7f0bdda3f780 -1 Falling back to public interface
2026-03-08T22:43:10.720 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:43:10.720 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:10.720 INFO:tasks.workunit.client.0.vm00.stdout:1
2026-03-08T22:43:10.721 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1
2026-03-08T22:43:10.721 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:10.721 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up'
2026-03-08T22:43:10.963 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:43:11.184 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:11.183+0000 7f0bdda3f780 -1 osd.1 0 log_to_monitors true
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stdout:2
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:11.966 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up'
2026-03-08T22:43:12.215 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:43:13.217 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:43:13.217 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:13.217 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3
2026-03-08T22:43:13.217 INFO:tasks.workunit.client.0.vm00.stdout:3
2026-03-08T22:43:13.218 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:13.218 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up'
2026-03-08T22:43:13.477 INFO:tasks.workunit.client.0.vm00.stdout:osd.1 up in weight 1 up_from 10 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6810/3938254067,v1:127.0.0.1:6811/3938254067] [v2:127.0.0.1:6812/3938254067,v1:127.0.0.1:6813/3938254067] exists,up c7a8114e-0eb1-4bf6-bbe7-33ca5d4badf9
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1))
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 2
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=2
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/2
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/2'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/2/journal'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir='
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+=
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery'
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=
2026-03-08T22:43:13.478 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid'
2026-03-08T22:43:13.479 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460'
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64'
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*'
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops'
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' '
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+=
2026-03-08T22:43:13.480 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/2
2026-03-08T22:43:13.481 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen
2026-03-08T22:43:13.482 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=15ac8828-83c6-419b-88d6-ca70a08e7f02
2026-03-08T22:43:13.482 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd2 15ac8828-83c6-419b-88d6-ca70a08e7f02'
2026-03-08T22:43:13.482 INFO:tasks.workunit.client.0.vm00.stdout:add osd2 15ac8828-83c6-419b-88d6-ca70a08e7f02
2026-03-08T22:43:13.483 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key
2026-03-08T22:43:13.496 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCB+61prVqIHRAAerXKfmVv4SnlygcVSgZ0xQ==
2026-03-08T22:43:13.496 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCB+61prVqIHRAAerXKfmVv4SnlygcVSgZ0xQ=="}'
2026-03-08T22:43:13.496 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 15ac8828-83c6-419b-88d6-ca70a08e7f02 -i td/mon-stretch-fail-recovery/2/new.json
2026-03-08T22:43:13.748 INFO:tasks.workunit.client.0.vm00.stdout:2
2026-03-08T22:43:13.758 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/2/new.json
2026-03-08T22:43:13.759 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 2 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/2 --osd-journal=td/mon-stretch-fail-recovery/2/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCB+61prVqIHRAAerXKfmVv4SnlygcVSgZ0xQ== --osd-uuid 15ac8828-83c6-419b-88d6-ca70a08e7f02
2026-03-08T22:43:13.778 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:13.777+0000 7f366b6c2780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:13.781 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:13.780+0000 7f366b6c2780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:13.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:13.781+0000 7f366b6c2780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:13.782 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:13.781+0000 7f366b6c2780 -1 bdev(0x55d5e7e93c00 td/mon-stretch-fail-recovery/2/block) open stat got: (1) Operation not permitted
2026-03-08T22:43:13.783 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:13.781+0000 7f366b6c2780 -1 bluestore(td/mon-stretch-fail-recovery/2) _read_fsid unparsable uuid
2026-03-08T22:43:16.670 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/2/keyring
2026-03-08T22:43:16.670 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat
2026-03-08T22:43:16.671 INFO:tasks.workunit.client.0.vm00.stdout:adding osd2 key to auth repository
2026-03-08T22:43:16.671 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd2 key to auth repository
2026-03-08T22:43:16.671 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/2/keyring auth add osd.2 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd'
2026-03-08T22:43:16.982 INFO:tasks.workunit.client.0.vm00.stdout:start osd.2
2026-03-08T22:43:16.983 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.2
2026-03-08T22:43:16.983 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 2 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/2 --osd-journal=td/mon-stretch-fail-recovery/2/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops
2026-03-08T22:43:16.983 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]'
2026-03-08T22:43:16.986 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json
2026-03-08T22:43:16.988 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"'
2026-03-08T22:43:17.004 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:17.001+0000 7f1dc3a1c780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:17.006 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:17.005+0000 7f1dc3a1c780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:17.009 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:17.007+0000 7f1dc3a1c780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 2
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=2
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 ))
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stdout:0
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up'
2026-03-08T22:43:17.273 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:17.526 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:43:17.563 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:17.562+0000 7f1dc3a1c780 -1 Falling back to public interface
2026-03-08T22:43:18.420 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:18.418+0000 7f1dc3a1c780 -1 osd.2 0 log_to_monitors true
2026-03-08T22:43:18.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:43:18.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:18.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1
2026-03-08T22:43:18.527 INFO:tasks.workunit.client.0.vm00.stdout:1
2026-03-08T22:43:18.528 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:18.528 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up'
2026-03-08T22:43:18.807 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:43:19.808 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:43:19.808 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:43:19.808 INFO:tasks.workunit.client.0.vm00.stdout:2
2026-03-08T22:43:19.808 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2
2026-03-08T22:43:19.809 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:43:19.809 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up'
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stdout:osd.2 up in weight 1 up_from 15 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6818/1928092983,v1:127.0.0.1:6819/1928092983] [v2:127.0.0.1:6820/1928092983,v1:127.0.0.1:6821/1928092983] exists,up 15ac8828-83c6-419b-88d6-ca70a08e7f02
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1))
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 3
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=3
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/3
2026-03-08T22:43:20.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/3'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/3/journal'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir='
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+=
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']'
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725
2026-03-08T22:43:20.063 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops'
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' '
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+=
2026-03-08T22:43:20.064 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/3
2026-03-08T22:43:20.065 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen
2026-03-08T22:43:20.066 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=0199cfd5-0609-4b53-93d2-c548dc4dc140
2026-03-08T22:43:20.066 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd3 0199cfd5-0609-4b53-93d2-c548dc4dc140' 2026-03-08T22:43:20.066 INFO:tasks.workunit.client.0.vm00.stdout:add osd3 0199cfd5-0609-4b53-93d2-c548dc4dc140 2026-03-08T22:43:20.067 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:43:20.080 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCI+61pW6vEBBAAra6rmDNzN/BYQ6K3nbZ08g== 2026-03-08T22:43:20.080 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCI+61pW6vEBBAAra6rmDNzN/BYQ6K3nbZ08g=="}' 2026-03-08T22:43:20.080 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 0199cfd5-0609-4b53-93d2-c548dc4dc140 -i td/mon-stretch-fail-recovery/3/new.json 2026-03-08T22:43:20.365 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:20.375 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/3/new.json 2026-03-08T22:43:20.376 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 3 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/3 --osd-journal=td/mon-stretch-fail-recovery/3/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 
--debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCI+61pW6vEBBAAra6rmDNzN/BYQ6K3nbZ08g== --osd-uuid 0199cfd5-0609-4b53-93d2-c548dc4dc140 2026-03-08T22:43:20.396 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:20.395+0000 7f4558c0e780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:20.398 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:20.397+0000 7f4558c0e780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:20.400 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:20.398+0000 7f4558c0e780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:20.400 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:20.399+0000 7f4558c0e780 -1 bdev(0x558190379c00 td/mon-stretch-fail-recovery/3/block) open stat got: (1) Operation not permitted 2026-03-08T22:43:20.400 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:20.399+0000 7f4558c0e780 -1 bluestore(td/mon-stretch-fail-recovery/3) _read_fsid unparsable uuid 2026-03-08T22:43:22.507 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/3/keyring 2026-03-08T22:43:22.508 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:22.508 INFO:tasks.workunit.client.0.vm00.stdout:adding osd3 key to auth repository 2026-03-08T22:43:22.508 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd3 key to auth repository 2026-03-08T22:43:22.508 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/3/keyring auth add osd.3 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:22.826 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.3 2026-03-08T22:43:22.826 INFO:tasks.workunit.client.0.vm00.stdout:start osd.3 2026-03-08T22:43:22.826 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 3 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/3 --osd-journal=td/mon-stretch-fail-recovery/3/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:22.827 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:22.827 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:22.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:22.844 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:22.843+0000 
7f0fb6ced780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:22.846 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:22.845+0000 7f0fb6ced780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:22.847 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:22.846+0000 7f0fb6ced780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:23.071 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 3 2026-03-08T22:43:23.071 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:23.071 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=3 2026-03-08T22:43:23.071 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:23.072 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up' 2026-03-08T22:43:23.322 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:24.183 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:24.182+0000 7f0fb6ced780 -1 Falling back to public interface 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:24.323 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up' 2026-03-08T22:43:24.569 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:25.040 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:25.039+0000 7f0fb6ced780 -1 osd.3 0 log_to_monitors true 2026-03-08T22:43:25.570 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:25.571 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:25.571 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2 2026-03-08T22:43:25.571 
INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:43:25.572 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:25.572 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up' 2026-03-08T22:43:26.104 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:27.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up' 2026-03-08T22:43:27.405 INFO:tasks.workunit.client.0.vm00.stdout:osd.3 up in weight 1 up_from 20 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6826/2865482948,v1:127.0.0.1:6827/2865482948] [v2:127.0.0.1:6828/2865482948,v1:127.0.0.1:6829/2865482948] exists,up 0199cfd5-0609-4b53-93d2-c548dc4dc140 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:27.406 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 4 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=4 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/4 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:43:27.406 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/4' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/4/journal' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:43:27.406 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:43:27.407 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: 
ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:43:27.407 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/4 2026-03-08T22:43:27.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:43:27.408 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=9ab78ae8-ad2c-423d-b659-2cf878a4dd15 2026-03-08T22:43:27.409 INFO:tasks.workunit.client.0.vm00.stdout:add osd4 9ab78ae8-ad2c-423d-b659-2cf878a4dd15 2026-03-08T22:43:27.409 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd4 9ab78ae8-ad2c-423d-b659-2cf878a4dd15' 2026-03-08T22:43:27.409 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool 
--gen-print-key 2026-03-08T22:43:27.422 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCP+61pQIggGRAADCrKWMDM3Z+FmFOa5eIXEA== 2026-03-08T22:43:27.422 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCP+61pQIggGRAADCrKWMDM3Z+FmFOa5eIXEA=="}' 2026-03-08T22:43:27.422 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 9ab78ae8-ad2c-423d-b659-2cf878a4dd15 -i td/mon-stretch-fail-recovery/4/new.json 2026-03-08T22:43:27.683 INFO:tasks.workunit.client.0.vm00.stdout:4 2026-03-08T22:43:27.693 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/4/new.json 2026-03-08T22:43:27.694 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 4 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/4 --osd-journal=td/mon-stretch-fail-recovery/4/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCP+61pQIggGRAADCrKWMDM3Z+FmFOa5eIXEA== --osd-uuid 9ab78ae8-ad2c-423d-b659-2cf878a4dd15 2026-03-08T22:43:27.715 
INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:27.713+0000 7f914d9f9780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:27.717 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:27.716+0000 7f914d9f9780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:27.719 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:27.717+0000 7f914d9f9780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:27.719 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:27.718+0000 7f914d9f9780 -1 bdev(0x5630587bfc00 td/mon-stretch-fail-recovery/4/block) open stat got: (1) Operation not permitted 2026-03-08T22:43:27.719 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:27.718+0000 7f914d9f9780 -1 bluestore(td/mon-stretch-fail-recovery/4) _read_fsid unparsable uuid 2026-03-08T22:43:30.109 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/4/keyring 2026-03-08T22:43:30.109 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:30.110 INFO:tasks.workunit.client.0.vm00.stdout:adding osd4 key to auth repository 2026-03-08T22:43:30.110 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd4 key to auth repository 2026-03-08T22:43:30.111 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/4/keyring auth add osd.4 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:30.420 INFO:tasks.workunit.client.0.vm00.stdout:start osd.4 2026-03-08T22:43:30.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: 
run_osd: echo start osd.4 2026-03-08T22:43:30.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 4 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/4 --osd-journal=td/mon-stretch-fail-recovery/4/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:30.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:30.421 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:30.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:30.437 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:30.436+0000 7f811602c780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:30.439 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:30.438+0000 7f811602c780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:30.441 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:30.439+0000 7f811602c780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 4 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=4 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:30.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.4 up' 2026-03-08T22:43:30.885 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:31.246 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:31.244+0000 7f811602c780 -1 Falling back to public interface 2026-03-08T22:43:31.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 
2026-03-08T22:43:31.886 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:31.887 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:31.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:31.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:31.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.4 up' 2026-03-08T22:43:32.124 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:32.550 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:32.549+0000 7f811602c780 -1 osd.4 0 log_to_monitors true 2026-03-08T22:43:33.126 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:33.126 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:33.126 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:43:33.126 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2 2026-03-08T22:43:33.126 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.4 up' 2026-03-08T22:43:33.128 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:33.409 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:34.410 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:34.410 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:34.411 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:34.411 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:43:34.411 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:34.411 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.4 up' 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stdout:osd.4 up in weight 1 up_from 25 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6834/3740935344,v1:127.0.0.1:6835/3740935344] [v2:127.0.0.1:6836/3740935344,v1:127.0.0.1:6837/3740935344] exists,up 9ab78ae8-ad2c-423d-b659-2cf878a4dd15 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: 
TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 5 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=5 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/5 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:43:34.647 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: 
run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/5' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/5/journal' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:43:34.648 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:43:34.649 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:43:34.649 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:43:34.649 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo 
/tmp/ceph-asok.51725 2026-03-08T22:43:34.649 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 
2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:43:34.650 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/5 2026-03-08T22:43:34.651 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:43:34.652 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47 2026-03-08T22:43:34.652 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd5 faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47' 2026-03-08T22:43:34.652 INFO:tasks.workunit.client.0.vm00.stdout:add osd5 faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47 2026-03-08T22:43:34.653 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:43:34.665 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCW+61pQ06oJxAAykP6gds9U5wOb87AY8ZfdQ== 2026-03-08T22:43:34.665 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCW+61pQ06oJxAAykP6gds9U5wOb87AY8ZfdQ=="}' 2026-03-08T22:43:34.665 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47 -i td/mon-stretch-fail-recovery/5/new.json 2026-03-08T22:43:34.928 INFO:tasks.workunit.client.0.vm00.stdout:5 2026-03-08T22:43:34.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/5/new.json 2026-03-08T22:43:34.939 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 5 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/5 --osd-journal=td/mon-stretch-fail-recovery/5/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCW+61pQ06oJxAAykP6gds9U5wOb87AY8ZfdQ== --osd-uuid faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47 2026-03-08T22:43:34.957 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:34.956+0000 7f5145411780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:34.959 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:34.959+0000 7f5145411780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:43:34.961 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:34.960+0000 7f5145411780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:34.961 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:34.960+0000 7f5145411780 -1 bdev(0x55e3acd63c00 td/mon-stretch-fail-recovery/5/block) open stat got: (1) Operation not permitted 2026-03-08T22:43:34.961 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:34.960+0000 7f5145411780 -1 bluestore(td/mon-stretch-fail-recovery/5) _read_fsid unparsable uuid 2026-03-08T22:43:37.338 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/5/keyring 2026-03-08T22:43:37.338 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:37.339 INFO:tasks.workunit.client.0.vm00.stdout:adding osd5 key to auth repository 2026-03-08T22:43:37.339 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd5 key to auth repository 2026-03-08T22:43:37.339 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/5/keyring auth add osd.5 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:37.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.5 2026-03-08T22:43:37.667 INFO:tasks.workunit.client.0.vm00.stdout:start osd.5 2026-03-08T22:43:37.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 5 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/5 --osd-journal=td/mon-stretch-fail-recovery/5/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:37.667 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:37.669 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:37.670 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:37.688 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:37.685+0000 7fa0e2016780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:37.693 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:37.692+0000 7fa0e2016780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:37.694 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:37.693+0000 7fa0e2016780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 5 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=5 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:37.899 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:37.900 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:37.900 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.5 up' 2026-03-08T22:43:38.126 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:38.500 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:38.499+0000 7fa0e2016780 -1 Falling back to public interface 2026-03-08T22:43:39.129 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:39.129 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:39.129 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:39.129 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:39.129 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:39.129 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.5 up' 2026-03-08T22:43:39.398 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:39.397+0000 7fa0e2016780 -1 osd.5 0 log_to_monitors true 2026-03-08T22:43:39.437 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:40.440 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.5 up' 2026-03-08T22:43:40.814 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:41.383 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:41.382+0000 7fa0dd112640 -1 osd.5 0 waiting for initial osdmap 2026-03-08T22:43:41.816 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:41.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:41.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:41.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:43:41.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:41.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.5 up' 2026-03-08T22:43:42.058 INFO:tasks.workunit.client.0.vm00.stdout:osd.5 up in weight 1 up_from 30 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6842/2549921091,v1:127.0.0.1:6843/2549921091] [v2:127.0.0.1:6844/2549921091,v1:127.0.0.1:6845/2549921091] exists,up faa4d5ea-f244-44ed-9fcb-ab29cf5f5e47 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:42.059 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 6 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=6 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/6 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 
2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/6' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/6/journal' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:43:42.059 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:43:42.059 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:43:42.060 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:43:42.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/6 2026-03-08T22:43:42.061 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:43:42.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=76cad42a-587b-48a3-9a1c-ab7579ec7f38 2026-03-08T22:43:42.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd6 76cad42a-587b-48a3-9a1c-ab7579ec7f38' 2026-03-08T22:43:42.062 INFO:tasks.workunit.client.0.vm00.stdout:add osd6 76cad42a-587b-48a3-9a1c-ab7579ec7f38 2026-03-08T22:43:42.063 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:43:42.075 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCe+61pTMt2BBAArOz3zpq7zjuryOESHBnOqQ== 2026-03-08T22:43:42.075 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCe+61pTMt2BBAArOz3zpq7zjuryOESHBnOqQ=="}' 2026-03-08T22:43:42.075 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 76cad42a-587b-48a3-9a1c-ab7579ec7f38 -i td/mon-stretch-fail-recovery/6/new.json 2026-03-08T22:43:42.319 INFO:tasks.workunit.client.0.vm00.stdout:6 2026-03-08T22:43:42.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/6/new.json 2026-03-08T22:43:42.330 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 6 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/6 --osd-journal=td/mon-stretch-fail-recovery/6/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCe+61pTMt2BBAArOz3zpq7zjuryOESHBnOqQ== --osd-uuid 76cad42a-587b-48a3-9a1c-ab7579ec7f38 2026-03-08T22:43:42.349 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:42.348+0000 7f5071238780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:43:42.351 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:42.351+0000 7f5071238780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:42.353 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:42.352+0000 7f5071238780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:42.353 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:42.352+0000 7f5071238780 -1 bdev(0x55bc38661c00 td/mon-stretch-fail-recovery/6/block) open stat got: (1) Operation not permitted 2026-03-08T22:43:42.353 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:42.352+0000 7f5071238780 -1 bluestore(td/mon-stretch-fail-recovery/6) _read_fsid unparsable uuid 2026-03-08T22:43:44.476 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/6/keyring 2026-03-08T22:43:44.476 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:44.477 INFO:tasks.workunit.client.0.vm00.stdout:adding osd6 key to auth repository 2026-03-08T22:43:44.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd6 key to auth repository 2026-03-08T22:43:44.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/6/keyring auth add osd.6 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:44.776 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.6 2026-03-08T22:43:44.776 INFO:tasks.workunit.client.0.vm00.stdout:start osd.6 2026-03-08T22:43:44.776 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 6 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/6 --osd-journal=td/mon-stretch-fail-recovery/6/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:44.776 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:44.777 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:44.779 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:44.796 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:44.793+0000 7efe76882780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:44.797 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:44.796+0000 7efe76882780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:44.800 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:44.798+0000 7efe76882780 -1 WARNING: all dangerous and experimental features are enabled. 
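Line 681 of ceph-helpers.sh, traced just above, checks the cluster-wide `noup` flag before bothering to wait for the OSD: `ceph osd dump --format=json | jq '.flags_set[]' | grep -q '"noup"'`. A minimal reproduction of that decision on a canned osdmap fragment, with `grep` doing the matching so the sketch needs neither a cluster nor `jq`; the flag names in the sample are illustrative:

```shell
# Canned stand-in for `ceph osd dump --format=json` output (flags are examples).
dump='{"epoch": 35, "flags_set": ["sortbitwise", "recovery_deletes", "purged_snapdirs"]}'
if echo "$dump" | grep -q '"noup"'; then
    echo 'noup is set: the OSD will not be marked up, skip waiting'
else
    echo 'noup not set: proceed to wait_for_osd up'
fi
```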
2026-03-08T22:43:45.865 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:45.864+0000 7efe76882780 -1 Falling back to public interface 2026-03-08T22:43:46.719 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:46.718+0000 7efe76882780 -1 osd.6 0 log_to_monitors true 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 6 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=6 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:48.037 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:48.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:48.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:48.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:48.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.6 up' 2026-03-08T22:43:48.364 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:49.365 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:49.365 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:49.366 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:49.366 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:49.366 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:49.367 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.6 up' 2026-03-08T22:43:49.601 INFO:tasks.workunit.client.0.vm00.stdout:osd.6 up in weight 1 up_from 35 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6850/198952882,v1:127.0.0.1:6851/198952882] [v2:127.0.0.1:6852/198952882,v1:127.0.0.1:6853/198952882] exists,up 76cad42a-587b-48a3-9a1c-ab7579ec7f38 2026-03-08T22:43:49.603 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:49.607 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:49.607 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:49.607 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:57: TEST_stretched_cluster_failover_add_three_osds: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:43:49.608 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:59: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 7 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=7 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/7 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:43:49.608 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/7' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/7/journal' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:43:49.608 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:43:49.609 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:43:49.609 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:43:49.609 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:43:49.609 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:43:49.609 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:43:49.609 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:43:49.610 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:43:49.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/7 2026-03-08T22:43:49.611 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:43:49.612 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=99f81493-c953-478e-9604-41a0e62b0a5a 2026-03-08T22:43:49.612 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd7 99f81493-c953-478e-9604-41a0e62b0a5a' 2026-03-08T22:43:49.612 INFO:tasks.workunit.client.0.vm00.stdout:add osd7 99f81493-c953-478e-9604-41a0e62b0a5a 2026-03-08T22:43:49.613 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:43:49.625 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQCl+61pSVQ4JRAAo0HBHptP9SbsOsEmr1tEaw== 2026-03-08T22:43:49.625 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQCl+61pSVQ4JRAAo0HBHptP9SbsOsEmr1tEaw=="}' 2026-03-08T22:43:49.625 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 99f81493-c953-478e-9604-41a0e62b0a5a -i td/mon-stretch-fail-recovery/7/new.json 2026-03-08T22:43:49.882 INFO:tasks.workunit.client.0.vm00.stdout:7 2026-03-08T22:43:49.894 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/7/new.json 2026-03-08T22:43:49.895 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 7 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/7 --osd-journal=td/mon-stretch-fail-recovery/7/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQCl+61pSVQ4JRAAo0HBHptP9SbsOsEmr1tEaw== --osd-uuid 99f81493-c953-478e-9604-41a0e62b0a5a 2026-03-08T22:43:49.923 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:49.922+0000 7fb23cacc780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:49.932 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:49.930+0000 7fb23cacc780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:43:49.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:49.932+0000 7fb23cacc780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:49.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:49.932+0000 7fb23cacc780 -1 bdev(0x55d6f3838800 td/mon-stretch-fail-recovery/7/block) open stat got: (1) Operation not permitted 2026-03-08T22:43:49.933 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:49.932+0000 7fb23cacc780 -1 bluestore(td/mon-stretch-fail-recovery/7) _read_fsid unparsable uuid 2026-03-08T22:43:52.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/7/keyring 2026-03-08T22:43:52.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:43:52.855 INFO:tasks.workunit.client.0.vm00.stdout:adding osd7 key to auth repository 2026-03-08T22:43:52.855 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd7 key to auth repository 2026-03-08T22:43:52.855 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/7/keyring auth add osd.7 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:43:53.198 INFO:tasks.workunit.client.0.vm00.stdout:start osd.7 2026-03-08T22:43:53.199 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.7 2026-03-08T22:43:53.199 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 7 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/7 --osd-journal=td/mon-stretch-fail-recovery/7/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:43:53.199 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:43:53.200 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:43:53.204 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:43:53.220 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:53.218+0000 7fe8b7ed9780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:53.224 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:53.224+0000 7fe8b7ed9780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:43:53.227 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:53.226+0000 7fe8b7ed9780 -1 WARNING: all dangerous and experimental features are enabled. 
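The `wait_for_osd` helper traced for osd.6 above and osd.7 below (ceph-helpers.sh lines 978-991) polls `ceph osd dump` for an `osd.N up` line once per second, giving up after 300 tries. A self-contained sketch of that loop: the cluster call is stubbed (via a counter file, since the pipeline runs it in a subshell) to report "up" on the third poll, and the sleep is shortened so the sketch finishes instantly:

```shell
# Stub for `ceph osd dump`: down for two polls, up from the third.
count_file=$(mktemp)
echo 0 > "$count_file"
ceph() {
    n=$(( $(cat "$count_file") + 1 ))
    echo "$n" > "$count_file"
    if [ "$n" -ge 3 ]; then echo 'osd.7 up in weight 1'; else echo 'osd.7 down'; fi
}

# Condensed wait_for_osd: poll until "osd.N <state>" appears, cap at 300 tries.
wait_for_osd() {
    local state=$1 id=$2 status=1 i
    for ((i = 0; i < 300; i++)); do
        if ceph osd dump | grep -q "osd.$id $state"; then
            status=0
            break
        fi
        sleep 0.01               # the real helper sleeps 1 second
    done
    return $status
}

wait_for_osd up 7 && echo "osd.7 came up after $(cat "$count_file") polls"
```

The trace shows exactly this shape: the `echo $i` / `ceph osd dump` / `grep 'osd.7 up'` / `sleep 1` cycle repeats until the dump line `osd.7 up in weight 1 …` appears and the helper returns 0.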
2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 7 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=7 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:53.445 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.7 up' 2026-03-08T22:43:53.717 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:54.718 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:54.718 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: 
wait_for_osd: (( i < 300 )) 2026-03-08T22:43:54.718 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:43:54.718 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:43:54.719 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:54.719 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.7 up' 2026-03-08T22:43:54.807 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:54.806+0000 7fe8b7ed9780 -1 Falling back to public interface 2026-03-08T22:43:54.956 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:55.715 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:43:55.714+0000 7fe8b7ed9780 -1 osd.7 0 log_to_monitors true 2026-03-08T22:43:55.957 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:55.957 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:55.957 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:43:55.957 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2 2026-03-08T22:43:55.957 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.7 up' 2026-03-08T22:43:55.959 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:56.285 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:43:57.286 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:43:57.287 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:43:57.287 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:43:57.287 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:43:57.288 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:43:57.288 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.7 up' 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stdout:osd.7 up in weight 1 up_from 40 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6858/3747185512,v1:127.0.0.1:6859/3747185512] [v2:127.0.0.1:6860/3747185512,v1:127.0.0.1:6861/3747185512] exists,up 99f81493-c953-478e-9604-41a0e62b0a5a 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:62: 
TEST_stretched_cluster_failover_add_three_osds: for zone in iris pze 2026-03-08T22:43:57.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:64: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket iris zone 2026-03-08T22:43:57.832 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'iris' already exists 2026-03-08T22:43:57.841 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:65: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move iris root=default 2026-03-08T22:43:58.161 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -5 name 'iris' to location {root=default} in crush map 2026-03-08T22:43:58.171 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:62: TEST_stretched_cluster_failover_add_three_osds: for zone in iris pze 2026-03-08T22:43:58.171 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:64: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket pze zone 2026-03-08T22:43:58.496 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'pze' already exists 2026-03-08T22:43:58.509 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:65: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move pze root=default 2026-03-08T22:43:58.831 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -7 name 'pze' to location {root=default} in crush map 2026-03-08T22:43:58.843 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:69: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket node-2 
host 2026-03-08T22:44:02.178 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-2' already exists 2026-03-08T22:44:02.188 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:70: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket node-3 host 2026-03-08T22:44:02.499 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-3' already exists 2026-03-08T22:44:02.509 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:71: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket node-4 host 2026-03-08T22:44:02.812 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-4' already exists 2026-03-08T22:44:02.822 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:72: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush add-bucket node-5 host 2026-03-08T22:44:03.336 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-5' already exists 2026-03-08T22:44:03.347 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:74: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move node-2 zone=iris 2026-03-08T22:44:03.649 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -9 name 'node-2' to location {zone=iris} in crush map 2026-03-08T22:44:03.661 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:75: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move node-3 zone=iris 2026-03-08T22:44:03.950 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -10 name 'node-3' to location {zone=iris} in crush map 2026-03-08T22:44:03.963 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:76: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move node-4 zone=pze
2026-03-08T22:44:04.270 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -11 name 'node-4' to location {zone=pze} in crush map
2026-03-08T22:44:04.281 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:77: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move node-5 zone=pze
2026-03-08T22:44:04.593 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -12 name 'node-5' to location {zone=pze} in crush map
2026-03-08T22:44:04.606 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:79: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.0 host=node-2
2026-03-08T22:44:04.937 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 0 name 'osd.0' to location {host=node-2} in crush map
2026-03-08T22:44:04.947 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:80: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.1 host=node-2
2026-03-08T22:44:05.321 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 1 name 'osd.1' to location {host=node-2} in crush map
2026-03-08T22:44:05.333 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:81: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.2 host=node-3
2026-03-08T22:44:05.654 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 2 name 'osd.2' to location {host=node-3} in crush map
2026-03-08T22:44:05.666 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:82: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.3 host=node-3
2026-03-08T22:44:05.986 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 3 name 'osd.3' to location {host=node-3} in crush map
2026-03-08T22:44:05.998 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:83: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.4 host=node-4
2026-03-08T22:44:06.308 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 4 name 'osd.4' to location {host=node-4} in crush map
2026-03-08T22:44:06.319 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:84: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.5 host=node-4
2026-03-08T22:44:06.626 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 5 name 'osd.5' to location {host=node-4} in crush map
2026-03-08T22:44:06.636 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:85: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.6 host=node-5
2026-03-08T22:44:09.947 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 6 name 'osd.6' to location {host=node-5} in crush map
2026-03-08T22:44:09.957 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:86: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush move osd.7 host=node-5
2026-03-08T22:44:10.255 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 7 name 'osd.7' to location {host=node-5} in crush map
2026-03-08T22:44:10.266 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:88: TEST_stretched_cluster_failover_add_three_osds: ceph mon set_location a zone=iris host=node-2
2026-03-08T22:44:10.676 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:89: TEST_stretched_cluster_failover_add_three_osds: ceph mon set_location b zone=iris host=node-3
2026-03-08T22:44:17.006 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:90: TEST_stretched_cluster_failover_add_three_osds: ceph mon set_location c zone=pze host=node-4
2026-03-08T22:44:22.336 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:91: TEST_stretched_cluster_failover_add_three_osds: ceph mon set_location d zone=pze host=node-5
2026-03-08T22:44:22.748 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:93: TEST_stretched_cluster_failover_add_three_osds: hostname -s
2026-03-08T22:44:22.748 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:93: TEST_stretched_cluster_failover_add_three_osds: hostname=vm00
2026-03-08T22:44:22.748 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:94: TEST_stretched_cluster_failover_add_three_osds: ceph osd crush remove vm00
2026-03-08T22:44:23.093 INFO:tasks.workunit.client.0.vm00.stderr:device 'vm00' does not appear in the crush map
2026-03-08T22:44:23.104 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:95: TEST_stretched_cluster_failover_add_three_osds: ceph osd getcrushmap
2026-03-08T22:44:23.358 INFO:tasks.workunit.client.0.vm00.stderr:38
2026-03-08T22:44:23.369 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:96: TEST_stretched_cluster_failover_add_three_osds: crushtool --decompile crushmap
2026-03-08T22:44:23.383 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:97: TEST_stretched_cluster_failover_add_three_osds: sed 's/^# end crush map$//' crushmap.txt
2026-03-08T22:44:23.385 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:98: TEST_stretched_cluster_failover_add_three_osds: cat
2026-03-08T22:44:23.386 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:115: TEST_stretched_cluster_failover_add_three_osds: crushtool --compile crushmap_modified.txt -o crushmap.bin
2026-03-08T22:44:23.398 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: min_size is no longer supported, ignoring
2026-03-08T22:44:23.398 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: max_size is no longer supported, ignoring
2026-03-08T22:44:23.399 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:116: TEST_stretched_cluster_failover_add_three_osds: ceph osd setcrushmap -i crushmap.bin
2026-03-08T22:44:23.885 INFO:tasks.workunit.client.0.vm00.stderr:40
2026-03-08T22:44:23.900 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:117: TEST_stretched_cluster_failover_add_three_osds: local stretched_poolname=stretched_rbdpool
2026-03-08T22:44:23.900 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:118: TEST_stretched_cluster_failover_add_three_osds: ceph osd pool create stretched_rbdpool 32 32 stretch_rule
2026-03-08T22:44:24.292 INFO:tasks.workunit.client.0.vm00.stderr:pool 'stretched_rbdpool' already exists
2026-03-08T22:44:24.303 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:119: TEST_stretched_cluster_failover_add_three_osds: ceph osd pool set stretched_rbdpool size 4
2026-03-08T22:44:24.896 INFO:tasks.workunit.client.0.vm00.stderr:set pool 1 size to 4
2026-03-08T22:44:24.915 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:121: TEST_stretched_cluster_failover_add_three_osds: sleep 3
2026-03-08T22:44:27.916 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:123: TEST_stretched_cluster_failover_add_three_osds: ceph mon set_location e zone=arbiter host=node-1
2026-03-08T22:44:31.316 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:124: TEST_stretched_cluster_failover_add_three_osds: ceph mon enable_stretch_mode e stretch_rule zone
2026-03-08T22:44:37.801 INFO:tasks.workunit.client.0.vm00.stderr:Second attempt of previously successful command failed with EINVAL: stretch mode is already engaged
2026-03-08T22:44:37.801 INFO:tasks.workunit.client.0.vm00.stderr:stretch mode is already engaged
2026-03-08T22:44:37.811 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:126: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL mon.c
2026-03-08T22:44:37.812 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:37.812 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:37.812 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:37.812 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:37.812 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:127: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL mon.d
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:37.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:38.048 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:38.055 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:129: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL osd.4
2026-03-08T22:44:38.056 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:38.057 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:38.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:38.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:38.060 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:38.171 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:38.171 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:130: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL osd.5
2026-03-08T22:44:38.172 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:38.172 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:38.172 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:38.172 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:38.172 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:131: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL osd.6
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:38.277 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:38.382 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:132: TEST_stretched_cluster_failover_add_three_osds: kill_daemons td/mon-stretch-fail-recovery KILL osd.7
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:44:38.383 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:44:38.488 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:44:38.489 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:134: TEST_stretched_cluster_failover_add_three_osds: ceph -s
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: cluster:
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: id: e6beb2c8-8f22-428a-b327-33ee467015ad
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: health: HEALTH_WARN
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: 1 pool(s) do not have an application enabled
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: services:
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: mon: 5 daemons, quorum a,b,c,d,e (age 4s)
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: mgr: y(active, since 102s), standbys: x, z
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: osd: 8 osds: 8 up (since 44s), 8 in (since 52s)
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: data:
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: pools: 1 pools, 32 pgs
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: objects: 0 objects, 0 B
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: usage: 215 MiB used, 800 GiB / 800 GiB avail
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout: pgs: 32 active+clean
2026-03-08T22:44:41.812 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:44:41.827 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:136: TEST_stretched_cluster_failover_add_three_osds: sleep 3
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:138: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 8
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=8
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/8
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/8'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/8/journal'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir='
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+=
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery'
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=
2026-03-08T22:44:44.829 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']'
2026-03-08T22:44:44.830 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir
2026-03-08T22:44:44.830 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']'
2026-03-08T22:44:44.830 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725
2026-03-08T22:44:44.830 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops'
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' '
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+=
2026-03-08T22:44:44.831 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/8
2026-03-08T22:44:44.833 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen
2026-03-08T22:44:44.833 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef
2026-03-08T22:44:44.834 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd8 a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef'
2026-03-08T22:44:44.834 INFO:tasks.workunit.client.0.vm00.stdout:add osd8 a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef
2026-03-08T22:44:44.834 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key
2026-03-08T22:44:44.847 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQDc+61p2f10MhAAcFY0KcnYNP/SmIcqf0bBxw==
2026-03-08T22:44:44.847 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQDc+61p2f10MhAAcFY0KcnYNP/SmIcqf0bBxw=="}'
2026-03-08T22:44:44.847 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef -i td/mon-stretch-fail-recovery/8/new.json
2026-03-08T22:45:06.094 INFO:tasks.workunit.client.0.vm00.stdout:8
2026-03-08T22:45:06.104 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/8/new.json
2026-03-08T22:45:06.106 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 8 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/8 --osd-journal=td/mon-stretch-fail-recovery/8/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQDc+61p2f10MhAAcFY0KcnYNP/SmIcqf0bBxw== --osd-uuid a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef
2026-03-08T22:45:06.127 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:06.125+0000 7f48e69f9780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:09.129 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:09.128+0000 7f48e69f9780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:09.130 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:09.130+0000 7f48e69f9780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:09.131 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:09.130+0000 7f48e69f9780 -1 bdev(0x55f6112acc00 td/mon-stretch-fail-recovery/8/block) open stat got: (1) Operation not permitted
2026-03-08T22:45:09.131 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:09.130+0000 7f48e69f9780 -1 bluestore(td/mon-stretch-fail-recovery/8) _read_fsid unparsable uuid
2026-03-08T22:45:12.517 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/8/keyring
2026-03-08T22:45:12.526 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat
2026-03-08T22:45:12.526 INFO:tasks.workunit.client.0.vm00.stdout:adding osd8 key to auth repository
2026-03-08T22:45:12.526 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd8 key to auth repository
2026-03-08T22:45:12.526 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/8/keyring auth add osd.8 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd'
2026-03-08T22:45:18.815 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.8
2026-03-08T22:45:18.815 INFO:tasks.workunit.client.0.vm00.stdout:start osd.8
2026-03-08T22:45:18.815 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 8 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/8 --osd-journal=td/mon-stretch-fail-recovery/8/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops
2026-03-08T22:45:18.815 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"'
2026-03-08T22:45:18.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]'
2026-03-08T22:45:18.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json
2026-03-08T22:45:18.836 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:18.835+0000 7f9bd2813780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:19.048 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 8
2026-03-08T22:45:19.048 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=8
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 ))
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0
2026-03-08T22:45:19.049 INFO:tasks.workunit.client.0.vm00.stdout:0
2026-03-08T22:45:19.050 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.8 up'
2026-03-08T22:45:19.050 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:19.392 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:45:20.393 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:45:20.393 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:20.393 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1
2026-03-08T22:45:20.394 INFO:tasks.workunit.client.0.vm00.stdout:1
2026-03-08T22:45:20.394 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:20.394 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.8 up'
2026-03-08T22:45:20.628 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stdout:2
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:21.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.8 up'
2026-03-08T22:45:21.844 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:21.843+0000 7f9bd2813780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:21.846 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:21.844+0000 7f9bd2813780 -1 WARNING: all dangerous and experimental features are enabled.
2026-03-08T22:45:21.862 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stdout:3
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:22.863 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.8 up'
2026-03-08T22:45:22.927 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:22.926+0000 7f9bd2813780 -1 Falling back to public interface
2026-03-08T22:45:23.081 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:45:23.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:23.531+0000 7f9bd2813780 -1 osd.8 0 log_to_monitors true
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stdout:4
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 4
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:24.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.8 up'
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stdout:osd.8 up in weight 1 up_from 80 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6834/3320281674,v1:127.0.0.1:6835/3320281674] [v2:127.0.0.1:6836/3320281674,v1:127.0.0.1:6837/3320281674] exists,up a5742ca9-b1e5-47d8-b0fb-68a72ad3ffef
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:139: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 9
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=9
2026-03-08T22:45:27.324 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/9
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/9'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/9/journal'
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir='
2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+=
2026-03-08T22:45:27.325
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:45:27.325 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:45:27.326 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:45:27.326 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:45:27.326 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:45:27.326 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:45:27.327 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:45:27.327 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/9 2026-03-08T22:45:27.328 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:45:27.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=bface796-3563-4862-915d-ec2c22813d96 2026-03-08T22:45:27.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd9 bface796-3563-4862-915d-ec2c22813d96' 2026-03-08T22:45:27.329 INFO:tasks.workunit.client.0.vm00.stdout:add osd9 bface796-3563-4862-915d-ec2c22813d96 2026-03-08T22:45:27.330 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:45:27.343 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQAH/K1p/9B4FBAADGsCkx/B6i845XMPZ5+7Dg== 2026-03-08T22:45:27.344 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQAH/K1p/9B4FBAADGsCkx/B6i845XMPZ5+7Dg=="}' 2026-03-08T22:45:27.344 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new bface796-3563-4862-915d-ec2c22813d96 -i td/mon-stretch-fail-recovery/9/new.json 2026-03-08T22:45:27.574 INFO:tasks.workunit.client.0.vm00.stdout:9 2026-03-08T22:45:27.585 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/9/new.json 2026-03-08T22:45:27.585 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 9 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/9 --osd-journal=td/mon-stretch-fail-recovery/9/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQAH/K1p/9B4FBAADGsCkx/B6i845XMPZ5+7Dg== --osd-uuid bface796-3563-4862-915d-ec2c22813d96 2026-03-08T22:45:27.603 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:27.602+0000 7f448060f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:27.606 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:27.605+0000 7f448060f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:27.607 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:27.606+0000 7f448060f780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:45:27.607 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:27.606+0000 7f448060f780 -1 bdev(0x557bb15f7c00 td/mon-stretch-fail-recovery/9/block) open stat got: (1) Operation not permitted 2026-03-08T22:45:27.607 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:27.606+0000 7f448060f780 -1 bluestore(td/mon-stretch-fail-recovery/9) _read_fsid unparsable uuid 2026-03-08T22:45:30.487 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/9/keyring 2026-03-08T22:45:30.487 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:45:30.488 INFO:tasks.workunit.client.0.vm00.stdout:adding osd9 key to auth repository 2026-03-08T22:45:30.488 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd9 key to auth repository 2026-03-08T22:45:30.488 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/9/keyring auth add osd.9 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:45:33.784 INFO:tasks.workunit.client.0.vm00.stdout:start osd.9 2026-03-08T22:45:33.785 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.9 2026-03-08T22:45:33.785 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 9 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/9 
--osd-journal=td/mon-stretch-fail-recovery/9/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:45:33.785 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:45:33.786 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:45:33.787 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:45:33.804 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:33.802+0000 7f645e00d780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:33.806 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:33.805+0000 7f645e00d780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:33.809 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:33.807+0000 7f645e00d780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 9 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=9 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:45:34.025 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:45:34.026 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:45:34.026 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:45:34.026 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.9 up' 2026-03-08T22:45:34.625 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:34.624+0000 7f645e00d780 -1 Falling back to public interface 2026-03-08T22:45:35.729 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:35.728+0000 7f645e00d780 -1 osd.9 0 log_to_monitors true 2026-03-08T22:45:40.250 INFO:tasks.workunit.client.0.vm00.stdout:osd.9 up in weight 1 up_from 85 up_thru 0 down_at 0 last_clean_interval [0,0) 
[v2:127.0.0.1:6842/2954017572,v1:127.0.0.1:6843/2954017572] [v2:127.0.0.1:6844/2954017572,v1:127.0.0.1:6845/2954017572] exists,up bface796-3563-4862-915d-ec2c22813d96 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:140: TEST_stretched_cluster_failover_add_three_osds: run_osd td/mon-stretch-fail-recovery 10 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=10 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretch-fail-recovery/10 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretch-fail-recovery/10' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretch-fail-recovery/10/journal' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretch-fail-recovery' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:45:40.251 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:45:40.251 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretch-fail-recovery/$name.log' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' 
--pid-file=td/mon-stretch-fail-recovery/$name.pid' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:45:40.252 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretch-fail-recovery/10 2026-03-08T22:45:40.253 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:45:40.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=2fd5e287-08d6-4122-9719-97f5efa4e3f3 2026-03-08T22:45:40.254 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd10 2fd5e287-08d6-4122-9719-97f5efa4e3f3' 2026-03-08T22:45:40.254 INFO:tasks.workunit.client.0.vm00.stdout:add osd10 
2fd5e287-08d6-4122-9719-97f5efa4e3f3 2026-03-08T22:45:40.254 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:45:40.266 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQAU/K1pxR7dDxAA3h+P5FZk/YkvymNELHqNXQ== 2026-03-08T22:45:40.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQAU/K1pxR7dDxAA3h+P5FZk/YkvymNELHqNXQ=="}' 2026-03-08T22:45:40.267 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 2fd5e287-08d6-4122-9719-97f5efa4e3f3 -i td/mon-stretch-fail-recovery/10/new.json 2026-03-08T22:45:40.499 INFO:tasks.workunit.client.0.vm00.stdout:10 2026-03-08T22:45:40.510 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretch-fail-recovery/10/new.json 2026-03-08T22:45:40.510 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 10 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/10 --osd-journal=td/mon-stretch-fail-recovery/10/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 
'--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQAU/K1pxR7dDxAA3h+P5FZk/YkvymNELHqNXQ== --osd-uuid 2fd5e287-08d6-4122-9719-97f5efa4e3f3 2026-03-08T22:45:40.528 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:40.527+0000 7f6696402780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:43.530 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:43.529+0000 7f6696402780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:43.531 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:43.531+0000 7f6696402780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:43.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:43.531+0000 7f6696402780 -1 bdev(0x55805781ac00 td/mon-stretch-fail-recovery/10/block) open stat got: (1) Operation not permitted 2026-03-08T22:45:43.532 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:43.531+0000 7f6696402780 -1 bluestore(td/mon-stretch-fail-recovery/10) _read_fsid unparsable uuid 2026-03-08T22:45:46.211 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretch-fail-recovery/10/keyring 2026-03-08T22:45:46.211 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:45:46.212 INFO:tasks.workunit.client.0.vm00.stdout:adding osd10 key to auth repository 2026-03-08T22:45:46.212 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd10 key to auth repository 2026-03-08T22:45:46.212 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretch-fail-recovery/10/keyring auth add osd.10 osd 'allow *' mon 'allow profile osd' 
mgr 'allow profile osd' 2026-03-08T22:45:46.503 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.10 2026-03-08T22:45:46.503 INFO:tasks.workunit.client.0.vm00.stdout:start osd.10 2026-03-08T22:45:46.503 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 10 --fsid=e6beb2c8-8f22-428a-b327-33ee467015ad --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretch-fail-recovery/10 --osd-journal=td/mon-stretch-fail-recovery/10/journal --chdir= --run-dir=td/mon-stretch-fail-recovery '--admin-socket=/tmp/ceph-asok.51725/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretch-fail-recovery/$name.log' '--pid-file=td/mon-stretch-fail-recovery/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:45:46.503 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:45:46.504 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:45:46.506 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:45:46.521 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:46.519+0000 7f7f75dd4780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:45:49.526 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:49.525+0000 7f7f75dd4780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:49.527 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:49.526+0000 7f7f75dd4780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:45:49.736 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 10 2026-03-08T22:45:49.736 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:45:49.736 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=10 2026-03-08T22:45:49.736 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:45:49.736 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:45:49.737 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:45:49.737 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:45:49.737 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:45:49.737 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:45:49.737 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.10 up' 2026-03-08T22:45:50.032 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:45:50.862 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:50.860+0000 7f7f75dd4780 -1 Falling back to public interface 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:45:51.035 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.10 up' 2026-03-08T22:45:51.280 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:45:51.786 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:51.785+0000 7f7f75dd4780 -1 osd.10 0 log_to_monitors true 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: 
wait_for_osd: echo 2 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:45:52.283 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.10 up' 2026-03-08T22:45:52.529 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:45:53.531 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:45:53.531 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:45:53.531 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:45:53.531 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:45:53.532 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:45:53.532 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.10 up' 2026-03-08T22:45:53.758 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:45:54.280 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:45:54.279+0000 7f7f71575640 -1 osd.10 0 waiting for initial osdmap 2026-03-08T22:45:54.760 INFO:tasks.workunit.client.0.vm00.stdout:4 2026-03-08T22:45:54.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:45:54.760 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:45:54.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 4
2026-03-08T22:45:54.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:45:54.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.10 up'
2026-03-08T22:45:58.011 INFO:tasks.workunit.client.0.vm00.stdout:osd.10 up in weight 1 up_from 90 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6850/709741490,v1:127.0.0.1:6851/709741490] [v2:127.0.0.1:6852/709741490,v1:127.0.0.1:6853/709741490] exists,up 2fd5e287-08d6-4122-9719-97f5efa4e3f3
2026-03-08T22:45:58.011 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0
2026-03-08T22:45:58.012 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break
2026-03-08T22:45:58.012 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0
2026-03-08T22:45:58.012 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:142: TEST_stretched_cluster_failover_add_three_osds: ceph -s
2026-03-08T22:45:58.305 INFO:tasks.workunit.client.0.vm00.stdout: cluster:
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: id: e6beb2c8-8f22-428a-b327-33ee467015ad
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: health: HEALTH_WARN
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: We are missing stretch mode buckets, only requiring 1 of 2 buckets to peer
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: 2/5 mons down, quorum a,b,e
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: 4 osds down
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: 2 hosts (4 osds) down
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: 1 zone (4 osds) down
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: 1 pool(s) do not have an application enabled
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: services:
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: mon: 5 daemons, quorum a,b,e (age 55s), out of quorum: c, d
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: mgr: y(active, since 2m), standbys: x, z
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: osd: 11 osds: 7 up (since 3s), 11 in (since 17s)
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: data:
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: pools: 1 pools, 32 pgs
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: objects: 0 objects, 0 B
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: usage: 297 MiB used, 1.1 TiB / 1.1 TiB avail
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout: pgs: 32 active+undersized
2026-03-08T22:45:58.306 INFO:tasks.workunit.client.0.vm00.stdout:
2026-03-08T22:45:58.315 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:144: TEST_stretched_cluster_failover_add_three_osds: sleep 3
2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:146:
TEST_stretched_cluster_failover_add_three_osds: teardown td/mon-stretch-fail-recovery 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretch-fail-recovery KILL 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:46:01.317 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:46:01.522 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:46:01.522 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:46:01.523 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:46:01.523 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 2026-03-08T22:46:01.524 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:46:01.524 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:46:01.524 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:46:01.525 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.525 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:46:01.525 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:46:01.525 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.526 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:46:01.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:46:01.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretch-fail-recovery 2026-03-08T22:46:01.599 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:46:01.599 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.599 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:46:01.600 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.51725 2026-03-08T22:46:01.604 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:46:01.604 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-fail-recovery.sh:23: run: teardown td/mon-stretch-fail-recovery 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretch-fail-recovery KILL 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:46:01.605 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:46:01.605 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:46:01.607 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:46:01.607 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:46:01.608 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:46:01.608 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 
2026-03-08T22:46:01.609 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:46:01.609 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:46:01.609 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:46:01.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:46:01.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:46:01.610 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.611 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:46:01.612 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:46:01.612 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretch-fail-recovery 2026-03-08T22:46:01.613 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:46:01.613 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.613 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725 2026-03-08T22:46:01.613 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.51725 2026-03-08T22:46:01.613 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2377: main: code=0 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2381: main: teardown td/mon-stretch-fail-recovery 0 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretch-fail-recovery 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs=0 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretch-fail-recovery KILL 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:46:01.614 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:46:01.614 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:46:01.616 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:46:01.616 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:46:01.616 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:46:01.617 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 
2026-03-08T22:46:01.617 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:46:01.617 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:46:01.618 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:46:01.618 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.618 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:46:01.619 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:46:01.619 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.619 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:46:01.620 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o 0 = 1 ']' 2026-03-08T22:46:01.620 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretch-fail-recovery 2026-03-08T22:46:01.621 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:46:01.621 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']'
2026-03-08T22:46:01.621 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.51725
2026-03-08T22:46:01.621 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.51725
2026-03-08T22:46:01.622 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']'
2026-03-08T22:46:01.622 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0
2026-03-08T22:46:01.622 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2382: main: return 0
2026-03-08T22:46:01.622 INFO:teuthology.orchestra.run:Running command with timeout 3600
2026-03-08T22:46:01.646 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-08T22:46:01.685 INFO:tasks.workunit:Running workunit mon-stretch/mon-stretch-uneven-crush-weights.sh...
2026-03-08T22:46:01.695 DEBUG:teuthology.orchestra.run.vm00:workunit test mon-stretch/mon-stretch-uneven-crush-weights.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=569c3e99c9b32a51b4eaf08731c728f4513ed589 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh
2026-03-08T22:46:01.745 INFO:tasks.workunit.client.0.vm00.stderr:stty: 'standard input': Inappropriate ioctl for device
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:+ PS4='${BASH_SOURCE[0]}:$LINENO: ${FUNCNAME[0]}: '
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2370: main: export PATH=.:/home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2370: main: PATH=.:/home/ubuntu/.local/bin:/home/ubuntu/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/sbin
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2371: main: export PYTHONWARNINGS=ignore
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2371: main: PYTHONWARNINGS=ignore
2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2372: main: export CEPH_CONF=/dev/null
2026-03-08T22:46:01.749
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2372: main: CEPH_CONF=/dev/null 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2373: main: unset CEPH_ARGS 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2375: main: local code 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2376: main: run td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:5: run: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:6: run: shift 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:8: run: export CEPH_MON_A=127.0.0.1:7139 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:8: run: CEPH_MON_A=127.0.0.1:7139 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:9: run: export CEPH_MON_B=127.0.0.1:7141 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:9: run: CEPH_MON_B=127.0.0.1:7141 2026-03-08T22:46:01.749 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:10: run: export CEPH_MON_C=127.0.0.1:7142 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:10: run: CEPH_MON_C=127.0.0.1:7142 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:11: run: export CEPH_MON_D=127.0.0.1:7143 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:11: run: CEPH_MON_D=127.0.0.1:7143 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:12: run: export CEPH_MON_E=127.0.0.1:7144 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:12: run: CEPH_MON_E=127.0.0.1:7144 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:13: run: export CEPH_ARGS 2026-03-08T22:46:01.749 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:14: run: uuidgen 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:14: run: CEPH_ARGS+='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none ' 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:16: run: export 
'BASE_CEPH_ARGS=--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none ' 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:16: run: BASE_CEPH_ARGS='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none ' 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:17: run: CEPH_ARGS+=--mon-host=127.0.0.1:7139 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:19: run: set 2026-03-08T22:46:01.750 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:19: run: sed -n -e 's/^\(TEST_[0-9a-z_]*\) .*/\1/p' 2026-03-08T22:46:01.751 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:19: run: local funcs=TEST_stretched_cluster_uneven_weight 2026-03-08T22:46:01.751 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:20: run: for func in $funcs 2026-03-08T22:46:01.751 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:21: run: setup td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.751 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:131: setup: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.751 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:132: setup: teardown td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.752 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretched-cluster-uneven-weight KILL 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:46:01.752 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:46:01.753 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:46:01.753 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:46:01.754 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:46:01.754 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: 
teardown: stat -f -c %T . 2026-03-08T22:46:01.755 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:46:01.755 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:46:01.755 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:46:01.756 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.756 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:46:01.756 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:46:01.756 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.757 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:46:01.757 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:46:01.757 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.758 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: 
get_asok_dir 2026-03-08T22:46:01.763 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.763 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:01.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.68828 2026-03-08T22:46:01.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:133: setup: mkdir -p td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: get_asok_dir 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: mkdir -p /tmp/ceph-asok.68828 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: ulimit -n 2026-03-08T22:46:01.764 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: '[' 1024 -le 1024 ']' 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:136: setup: ulimit -n 4096 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:138: setup: '[' -z '' ']' 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:139: setup: trap 'teardown td/mon-stretched-cluster-uneven-weight 1' TERM HUP INT 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:22: run: TEST_stretched_cluster_uneven_weight td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:27: TEST_stretched_cluster_uneven_weight: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:28: TEST_stretched_cluster_uneven_weight: local OSDS=4 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:29: TEST_stretched_cluster_uneven_weight: local weight=0.09000 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:30: TEST_stretched_cluster_uneven_weight: setup td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:131: setup: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:132: setup: teardown td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs= 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretched-cluster-uneven-weight KILL 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:46:01.764 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:46:01.765 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:46:01.765 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:46:01.766 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:46:01.766 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 2026-03-08T22:46:01.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:46:01.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:46:01.767 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:46:01.768 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.768 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:46:01.768 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:46:01.768 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:46:01.769 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:46:01.770 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']' 2026-03-08T22:46:01.770 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.771 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:46:01.771 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.771 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:01.771 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.68828 2026-03-08T22:46:01.772 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:46:01.772 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:46:01.772 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:133: setup: mkdir -p td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.773 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: get_asok_dir 2026-03-08T22:46:01.773 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:01.773 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: 
get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:01.773 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:134: setup: mkdir -p /tmp/ceph-asok.68828 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: ulimit -n 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:135: setup: '[' 4096 -le 1024 ']' 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:138: setup: '[' -z '' ']' 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:139: setup: trap 'teardown td/mon-stretched-cluster-uneven-weight 1' TERM HUP INT 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:32: TEST_stretched_cluster_uneven_weight: run_mon td/mon-stretched-cluster-uneven-weight a --public-addr 127.0.0.1:7139 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=a 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local 
data=td/mon-stretched-cluster-uneven-weight/a 2026-03-08T22:46:01.775 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id a --mkfs --mon-data=td/mon-stretched-cluster-uneven-weight/a --run-dir=td/mon-stretched-cluster-uneven-weight --public-addr 127.0.0.1:7139 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.012 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.013 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:02.013 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id a --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretched-cluster-uneven-weight/a 
'--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretched-cluster-uneven-weight/log --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7139 2026-03-08T22:46:02.048 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:46:02.048 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon a fsid 2026-03-08T22:46:02.048 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:02.048 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=a 2026-03-08T22:46:02.048 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:46:02.049 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:46:02.049 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.a 2026-03-08T22:46:02.049 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.a 2026-03-08T22:46:02.049 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.a ']' 
2026-03-08T22:46:02.050 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:02.050 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.051 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.051 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.a.asok 2026-03-08T22:46:02.051 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:02.051 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.a.asok config get fsid 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon a mon_host 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=a 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:46:02.102 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.a 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.a 2026-03-08T22:46:02.102 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.a ']' 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.a.asok 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:02.103 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.a.asok config get mon_host 2026-03-08T22:46:02.154 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:33: TEST_stretched_cluster_uneven_weight: wait_for_quorum 300 1 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 
2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=1 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 1 ]] 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:46:02.155 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010262 2026-03-08T22:46:02.156 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:02.156 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009962 -lt 1773010262 ]] 2026-03-08T22:46:02.156 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 1' 2026-03-08T22:46:02.157 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:02.275 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:02.275 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":3,"quorum":[0],"quorum_names":["a"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":1,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:01.790407Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:02.276 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":3,"quorum":[0],"quorum_names":["a"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":1,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:01.790407Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:02.276 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 1' 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:46:02.279 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:35: TEST_stretched_cluster_uneven_weight: run_mon td/mon-stretched-cluster-uneven-weight b --public-addr 127.0.0.1:7141 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=b 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretched-cluster-uneven-weight/b 2026-03-08T22:46:02.279 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id b --mkfs --mon-data=td/mon-stretched-cluster-uneven-weight/b --run-dir=td/mon-stretched-cluster-uneven-weight --public-addr 127.0.0.1:7141 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:02.314 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:02.314 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id b --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretched-cluster-uneven-weight/b '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretched-cluster-uneven-weight/log --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7141 2026-03-08T22:46:02.352 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon b fsid 2026-03-08T22:46:02.353 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=b 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.b 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.b 2026-03-08T22:46:02.353 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.b ']' 2026-03-08T22:46:02.355 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:02.355 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.355 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.355 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.b.asok 2026-03-08T22:46:02.356 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:02.356 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.b.asok config get fsid 2026-03-08T22:46:02.407 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon b mon_host 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=b 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.b 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.b 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.b ']' 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:02.408 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.b.asok 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:02.408 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.b.asok config get mon_host 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:36: TEST_stretched_cluster_uneven_weight: CEPH_ARGS='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141' 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:37: TEST_stretched_cluster_uneven_weight: wait_for_quorum 300 2 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=2 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 
2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 2 ]] 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010262 2026-03-08T22:46:02.459 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:02.460 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009962 -lt 1773010262 ]] 2026-03-08T22:46:02.460 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 2' 2026-03-08T22:46:02.460 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:11.579 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:11.579 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":8,"quorum":[0,1],"quorum_names":["a","b"],"quorum_leader_name":"a","quorum_age":4,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":2,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:02.349184Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:11.579 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":8,"quorum":[0,1],"quorum_names":["a","b"],"quorum_leader_name":"a","quorum_age":4,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":2,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:02.349184Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:11.579 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 2' 2026-03-08T22:46:11.581 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: 
wait_for_quorum: break 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:39: TEST_stretched_cluster_uneven_weight: run_mon td/mon-stretched-cluster-uneven-weight c --public-addr 127.0.0.1:7142 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=c 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretched-cluster-uneven-weight/c 2026-03-08T22:46:11.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id c --mkfs --mon-data=td/mon-stretched-cluster-uneven-weight/c --run-dir=td/mon-stretched-cluster-uneven-weight --public-addr 127.0.0.1:7142 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:11.629 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:11.629 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id c --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretched-cluster-uneven-weight/c '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretched-cluster-uneven-weight/log --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7142 2026-03-08T22:46:11.665 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:46:11.666 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon c fsid 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=c 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.c 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.c 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.c ']' 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:11.666 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:11.667 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo 
/tmp/ceph-asok.68828/ceph-mon.c.asok 2026-03-08T22:46:11.667 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:11.667 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.c.asok config get fsid 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon c mon_host 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=c 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.c 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.c 2026-03-08T22:46:11.728 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.c ']' 2026-03-08T22:46:11.730 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:11.730 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:11.730 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:11.730 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.c.asok 2026-03-08T22:46:11.730 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:11.730 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.c.asok config get mon_host 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:40: TEST_stretched_cluster_uneven_weight: CEPH_ARGS='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142' 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:41: TEST_stretched_cluster_uneven_weight: wait_for_quorum 300 3 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=3 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 
2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 3 ]] 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:46:11.783 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:46:11.784 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010271 2026-03-08T22:46:11.784 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:11.785 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009971 -lt 1773010271 ]] 2026-03-08T22:46:11.785 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 3' 2026-03-08T22:46:11.785 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:17.909 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:17.909 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":12,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":3,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:11.673931Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:17.909 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":12,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":3,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:11.673931Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:17.909 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 3' 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:46:17.911 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:43: TEST_stretched_cluster_uneven_weight: run_mon td/mon-stretched-cluster-uneven-weight d --public-addr 127.0.0.1:7143 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=d 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretched-cluster-uneven-weight/d 2026-03-08T22:46:17.911 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id d --mkfs --mon-data=td/mon-stretched-cluster-uneven-weight/d --run-dir=td/mon-stretched-cluster-uneven-weight --public-addr 127.0.0.1:7143 2026-03-08T22:46:17.938 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:17.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id d --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretched-cluster-uneven-weight/d '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretched-cluster-uneven-weight/log --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode 
off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7143 2026-03-08T22:46:17.969 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:46:17.969 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon d fsid 2026-03-08T22:46:17.969 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:17.969 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=d 2026-03-08T22:46:17.969 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:46:17.970 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:46:17.970 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.d 2026-03-08T22:46:17.970 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.d 2026-03-08T22:46:17.970 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.d ']' 2026-03-08T22:46:17.973 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:17.973 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:17.973 
INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:17.973 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.d.asok 2026-03-08T22:46:17.973 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:17.973 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.d.asok config get fsid 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon d mon_host 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=d 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.d 2026-03-08T22:46:18.030 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.d 2026-03-08T22:46:18.031 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.d ']' 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.d.asok 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:18.031 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.d.asok config get mon_host 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:44: TEST_stretched_cluster_uneven_weight: CEPH_ARGS='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143' 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:45: TEST_stretched_cluster_uneven_weight: wait_for_quorum 300 4 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local 
timeout=300 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=4 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 4 ]] 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010278 2026-03-08T22:46:18.084 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:18.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009978 -lt 1773010278 ]] 2026-03-08T22:46:18.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 4' 2026-03-08T22:46:18.085 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:24.201 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:24.202 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":14,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:24.202 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 4' 2026-03-08T22:46:24.202 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":14,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:24.204 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=false 2026-03-08T22:46:24.204 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ false == \t\r\u\e ]] 2026-03-08T22:46:24.204 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:24.205 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009984 -lt 1773010278 ]] 2026-03-08T22:46:24.205 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 4' 2026-03-08T22:46:24.205 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:24.331 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:24.331 INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":14,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce"
:0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:24.331 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo '{"election_epoch":14,"quorum":[0,1,2],"quorum_names":["a","b","c"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:
7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:24.331 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 4' 2026-03-08T22:46:24.333 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=false 2026-03-08T22:46:24.333 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ false == \t\r\u\e ]] 2026-03-08T22:46:24.333 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:24.334 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009984 -lt 1773010278 ]] 2026-03-08T22:46:24.334 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 4' 2026-03-08T22:46:24.334 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:27.453 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:27.453 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":16,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":2,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:27.453 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":16,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":2,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":4,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:17.973440Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:27.453 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 4' 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:46:27.456 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:47: TEST_stretched_cluster_uneven_weight: run_mon td/mon-stretched-cluster-uneven-weight e --public-addr 127.0.0.1:7144 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:448: run_mon: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:449: run_mon: shift 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:450: run_mon: local id=e 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:451: run_mon: shift 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:452: run_mon: local data=td/mon-stretched-cluster-uneven-weight/e 2026-03-08T22:46:27.456 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:455: run_mon: ceph-mon --id e --mkfs 
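The `wait_for_quorum` trace above (ceph-helpers.sh:1055-1078) shows the polling pattern: compute a deadline, repeatedly run `timeout 300 ceph quorum_status --format=json`, apply the jq filter `.quorum | length == N`, and break once it yields `true`. A minimal Python sketch of that same logic, with a stubbed `quorum_status()` standing in for the real `ceph` call (the function and stub names here are illustrative, not taken from the log):

```python
import json
import time

def wait_for_quorum(quorum_status, quorumsize, timeout=300, poll=1.0):
    """Poll quorum_status() until the quorum has quorumsize members.

    Mirrors qa/standalone/ceph-helpers.sh:wait_for_quorum, which runs
    `timeout 300 ceph quorum_status --format=json` each iteration and
    applies the jq filter `.quorum | length == N` to the output.
    """
    wait_until = time.time() + timeout          # helpers.sh:1068
    while time.time() < wait_until:             # helpers.sh:1069
        status = json.loads(quorum_status())    # helpers.sh:1071
        # Equivalent of the jq filter `.quorum | length == quorumsize`
        if len(status["quorum"]) == quorumsize: # helpers.sh:1072-1075
            return True
        time.sleep(poll)
    return False                                # loop expired: no quorum

# Stub standing in for `ceph quorum_status --format=json`; the real
# command queries the monitors listed in --mon-host.
def fake_quorum_status():
    return json.dumps({"quorum": [0, 1, 2, 3],
                       "quorum_names": ["a", "b", "c", "d"]})

print(wait_for_quorum(fake_quorum_status, 4, timeout=5))
print(wait_for_quorum(fake_quorum_status, 5, timeout=0.5, poll=0.1))
```

This matches the behavior visible in the log: with four mons in quorum the size-4 check succeeds and returns, while the later size-5 wait keeps retrying until mon.e joins or the deadline expires.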
--mon-data=td/mon-stretched-cluster-uneven-weight/e --run-dir=td/mon-stretched-cluster-uneven-weight --public-addr 127.0.0.1:7144 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: get_asok_path 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:27.487 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:27.488 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:462: run_mon: ceph-mon --id e --osd-failsafe-full-ratio=.99 --mon-osd-full-ratio=.99 --mon-data-avail-crit=1 --mon-data-avail-warn=5 --paxos-propose-interval=0.1 --osd-crush-chooseleaf-type=0 --debug-mon 20 --debug-ms 20 --debug-paxos 20 --chdir= --mon-data=td/mon-stretched-cluster-uneven-weight/e '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --mon-cluster-log-file=td/mon-stretched-cluster-uneven-weight/log 
--run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mon-allow-pool-delete --mon-allow-pool-size-one --osd-pool-default-pg-autoscale-mode off --mon-osd-backfillfull-ratio .99 --mon-warn-on-insecure-global-id-reclaim-allowed=false --public-addr 127.0.0.1:7144 2026-03-08T22:46:27.522 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: cat 2026-03-08T22:46:27.522 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon e fsid 2026-03-08T22:46:27.522 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:27.522 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=e 2026-03-08T22:46:27.522 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=fsid 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.e 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .fsid 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.e 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.e ']' 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 
2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.e.asok 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:27.523 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.e.asok config get fsid 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:487: run_mon: get_config mon e mon_host 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1125: get_config: local daemon=mon 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1126: get_config: local id=e 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1127: get_config: local config=mon_host 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1132: get_config: jq -r .mon_host 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: get_asok_path mon.e 2026-03-08T22:46:27.592 
INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name=mon.e 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n mon.e ']' 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: get_asok_dir 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:////home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:118: get_asok_path: echo /tmp/ceph-asok.68828/ceph-mon.e.asok 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: CEPH_ARGS= 2026-03-08T22:46:27.592 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1129: get_config: ceph --format json daemon /tmp/ceph-asok.68828/ceph-mon.e.asok config get mon_host 2026-03-08T22:46:27.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:48: TEST_stretched_cluster_uneven_weight: CEPH_ARGS='--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:46:27.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:49: 
TEST_stretched_cluster_uneven_weight: wait_for_quorum 300 5 2026-03-08T22:46:27.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1055: wait_for_quorum: local timeout=300 2026-03-08T22:46:27.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1056: wait_for_quorum: local quorumsize=5 2026-03-08T22:46:27.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1058: wait_for_quorum: [[ -z 300 ]] 2026-03-08T22:46:27.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1062: wait_for_quorum: [[ -z 5 ]] 2026-03-08T22:46:27.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1067: wait_for_quorum: no_quorum=1 2026-03-08T22:46:27.643 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: date +%s 2026-03-08T22:46:27.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1068: wait_for_quorum: wait_until=1773010287 2026-03-08T22:46:27.643 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:27.644 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009987 -lt 1773010287 ]] 2026-03-08T22:46:27.644 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 5' 2026-03-08T22:46:27.644 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 
2026-03-08T22:46:33.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:33.763 INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":18,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_loc
ation":"{}"}]}}' 2026-03-08T22:46:33.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 5' 2026-03-08T22:46:33.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo '{"election_epoch":18,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2
","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:33.766 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=false 2026-03-08T22:46:33.766 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ false == \t\r\u\e ]] 2026-03-08T22:46:33.766 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:33.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009993 -lt 1773010287 ]] 2026-03-08T22:46:33.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 5' 2026-03-08T22:46:33.767 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:33.892 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:33.893 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":18,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:33.893 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":18,"quorum":[0,1,2,3],"quorum_names":["a","b","c","d"],"quorum_leader_name":"a","quorum_age":1,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:33.893 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 5' 2026-03-08T22:46:33.895 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=false 2026-03-08T22:46:33.895 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ false == \t\r\u\e ]] 2026-03-08T22:46:33.896 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: date +%s 2026-03-08T22:46:33.896 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1069: wait_for_quorum: [[ 1773009993 -lt 1773010287 ]] 2026-03-08T22:46:33.896 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1070: wait_for_quorum: jqfilter='.quorum | length == 5' 2026-03-08T22:46:33.897 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: timeout 300 ceph quorum_status --format=json 2026-03-08T22:46:35.415 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1071: wait_for_quorum: jqinput=' 2026-03-08T22:46:35.415 
INFO:tasks.workunit.client.0.vm00.stderr:{"election_epoch":20,"quorum":[0,1,2,3,4],"quorum_names":["a","b","c","d","e"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:35.415 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: echo 
'{"election_epoch":20,"quorum":[0,1,2,3,4],"quorum_names":["a","b","c","d","e"],"quorum_leader_name":"a","quorum_age":0,"features":{"quorum_con":"4540701547738038271","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"]},"monmap":{"epoch":5,"fsid":"3362c658-ff13-4ba2-bff3-d87f427a3068","modified":"2026-03-08T22:46:27.528809Z","created":"2026-03-08T22:46:01.790407Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7139","nonce":0}]},"addr":"127.0.0.1:7139/0","public_addr":"127.0.0.1:7139/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7141","nonce":0}]},"addr":"127.0.0.1:7141/0","public_addr":"127.0.0.1:7141/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7142","nonce":0}]},"addr":"127.0.0.1:7142/0","public_addr":"127.0.0.1:7142/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":3,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7143","nonce":0}]},"addr":"127.0.0.1:7143/0","public_addr":"127.0.0.1:7143/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":4,"name":"e","public_addrs":{"addrvec":[{"type":"v2","addr":"127.0.0.1:7144","nonce":0}]},"addr":"127.0.0.1:7144/0","public_addr":"127.0.0.1:7144/0","priority":0,"weight":0,"crush_location":"{}"}]}}' 2026-03-08T22:46:35.415 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: jq '.quorum | length == 5' 
2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1072: wait_for_quorum: res=true 2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1073: wait_for_quorum: [[ true == \t\r\u\e ]] 2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1074: wait_for_quorum: no_quorum=0 2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1075: wait_for_quorum: break 2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1078: wait_for_quorum: return 0 2026-03-08T22:46:35.417 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:51: TEST_stretched_cluster_uneven_weight: ceph mon set election_strategy connectivity 2026-03-08T22:46:40.536 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:52: TEST_stretched_cluster_uneven_weight: ceph mon add disallowed_leader e 2026-03-08T22:46:40.677 INFO:tasks.workunit.client.0.vm00.stderr:mon.e is already disallowed 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:54: TEST_stretched_cluster_uneven_weight: run_mgr td/mon-stretched-cluster-uneven-weight x 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:40.685 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: run_mgr: shift 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=x 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretched-cluster-uneven-weight/x 2026-03-08T22:46:40.685 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph config set mgr mgr_pool false --force 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:40.840 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo 
'/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:40.841 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:40.842 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id x --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretched-cluster-uneven-weight/x '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:40.864 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:55: TEST_stretched_cluster_uneven_weight: run_mgr td/mon-stretched-cluster-uneven-weight y 2026-03-08T22:46:40.864 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:40.864 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: run_mgr: shift 2026-03-08T22:46:40.865 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=y 2026-03-08T22:46:40.865 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:46:40.865 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretched-cluster-uneven-weight/y 
2026-03-08T22:46:40.865 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph config set mgr mgr_pool false --force 2026-03-08T22:46:40.993 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:46:40.993 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:40.993 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:40.994 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:40.994 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:40.994 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:40.994 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:40.994 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:40.995 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id y --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretched-cluster-uneven-weight/y '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' 
'--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:56: TEST_stretched_cluster_uneven_weight: run_mgr td/mon-stretched-cluster-uneven-weight z 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:553: run_mgr: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:554: run_mgr: shift 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:555: run_mgr: local id=z 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:556: run_mgr: shift 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:557: run_mgr: local data=td/mon-stretched-cluster-uneven-weight/z 2026-03-08T22:46:41.020 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:559: run_mgr: ceph config set mgr mgr_pool false --force 2026-03-08T22:46:41.256 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: get_asok_path 2026-03-08T22:46:41.256 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:41.256 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:41.257 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:41.257 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:41.258 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:41.258 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:41.258 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: realpath /home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:41.259 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:561: run_mgr: ceph-mgr --id z --osd-failsafe-full-ratio=.99 --debug-mgr 20 --debug-objecter 20 --debug-ms 20 --debug-paxos 20 --chdir= --mgr-data=td/mon-stretched-cluster-uneven-weight/z '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --run-dir=td/mon-stretched-cluster-uneven-weight '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --mgr-module-path=/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr 2026-03-08T22:46:41.301 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: expr 4 - 1 2026-03-08T22:46:41.303 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: seq 0 3 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:60: TEST_stretched_cluster_uneven_weight: run_osd td/mon-stretched-cluster-uneven-weight 0 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=0 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretched-cluster-uneven-weight/0 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:46:41.304 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretched-cluster-uneven-weight/0' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretched-cluster-uneven-weight/0/journal' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretched-cluster-uneven-weight' 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:41.304 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n 
'' ']' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretched-cluster-uneven-weight/$name.log' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' 2026-03-08T22:46:41.306 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:46:41.306 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretched-cluster-uneven-weight/0 2026-03-08T22:46:41.307 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:46:41.309 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=86a7402a-6132-473a-ae74-bf432c614238 2026-03-08T22:46:41.309 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd0 86a7402a-6132-473a-ae74-bf432c614238' 2026-03-08T22:46:41.309 INFO:tasks.workunit.client.0.vm00.stdout:add osd0 86a7402a-6132-473a-ae74-bf432c614238 2026-03-08T22:46:41.310 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:46:41.336 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQBR/K1pAZqfExAADbCAQBGmvL45waUY7M3ITA== 2026-03-08T22:46:41.337 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQBR/K1pAZqfExAADbCAQBGmvL45waUY7M3ITA=="}' 2026-03-08T22:46:41.337 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new 86a7402a-6132-473a-ae74-bf432c614238 -i td/mon-stretched-cluster-uneven-weight/0/new.json 2026-03-08T22:46:41.604 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:46:41.615 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretched-cluster-uneven-weight/0/new.json 2026-03-08T22:46:41.619 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 0 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/0 --osd-journal=td/mon-stretched-cluster-uneven-weight/0/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 
'--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQBR/K1pAZqfExAADbCAQBGmvL45waUY7M3ITA== --osd-uuid 86a7402a-6132-473a-ae74-bf432c614238 2026-03-08T22:46:41.659 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:41.655+0000 7f0a2d65f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:41.660 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:41.660+0000 7f0a2d65f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:41.663 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:41.662+0000 7f0a2d65f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:41.663 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:41.662+0000 7f0a2d65f780 -1 bdev(0x55ca33ae2800 td/mon-stretched-cluster-uneven-weight/0/block) open stat got: (1) Operation not permitted 2026-03-08T22:46:41.663 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:41.663+0000 7f0a2d65f780 -1 bluestore(td/mon-stretched-cluster-uneven-weight/0) _read_fsid unparsable uuid 2026-03-08T22:46:44.656 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretched-cluster-uneven-weight/0/keyring 2026-03-08T22:46:44.656 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:46:44.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd0 key to auth repository 2026-03-08T22:46:44.657 INFO:tasks.workunit.client.0.vm00.stdout:adding osd0 key to auth repository 2026-03-08T22:46:44.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretched-cluster-uneven-weight/0/keyring auth add osd.0 osd 
'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:46:44.936 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.0 2026-03-08T22:46:44.936 INFO:tasks.workunit.client.0.vm00.stdout:start osd.0 2026-03-08T22:46:44.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 0 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/0 --osd-journal=td/mon-stretched-cluster-uneven-weight/0/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:46:44.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:46:44.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:46:44.941 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:46:45.026 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:45.025+0000 7fc0cfe0f780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:46:45.027 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:45.027+0000 7fc0cfe0f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:45.029 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:45.028+0000 7fc0cfe0f780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 0 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=0 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:45.245 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:46:45.492 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:45.843 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:45.843+0000 7fc0cfe0f780 -1 Falling back to public interface 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:46.494 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:46:46.735 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:46.953 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:46.953+0000 7fc0cfe0f780 -1 osd.0 0 log_to_monitors true 2026-03-08T22:46:47.737 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:46:47.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:47.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:47.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: 
wait_for_osd: echo 2 2026-03-08T22:46:47.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:47.738 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:46:48.003 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:48.044 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:48.044+0000 7fc0cb5ae640 -1 osd.0 0 waiting for initial osdmap 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.0 up' 2026-03-08T22:46:49.005 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:49.246 INFO:tasks.workunit.client.0.vm00.stdout:osd.0 up in weight 1 up_from 5 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6802/4007032637,v1:127.0.0.1:6803/4007032637] [v2:127.0.0.1:6804/4007032637,v1:127.0.0.1:6805/4007032637] exists,up 86a7402a-6132-473a-ae74-bf432c614238 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: 
status=0 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:60: TEST_stretched_cluster_uneven_weight: run_osd td/mon-stretched-cluster-uneven-weight 1 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=1 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretched-cluster-uneven-weight/1 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 
2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretched-cluster-uneven-weight/1' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretched-cluster-uneven-weight/1/journal' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:46:49.247 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretched-cluster-uneven-weight' 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: 
get_asok_path: '[' -n '' ']' 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:49.248 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretched-cluster-uneven-weight/$name.log' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' 2026-03-08T22:46:49.249 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:46:49.249 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretched-cluster-uneven-weight/1 2026-03-08T22:46:49.251 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:46:49.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=ebcb7d8e-5568-4b55-ac06-95e63aa05fe9 2026-03-08T22:46:49.251 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd1 ebcb7d8e-5568-4b55-ac06-95e63aa05fe9' 2026-03-08T22:46:49.251 INFO:tasks.workunit.client.0.vm00.stdout:add osd1 ebcb7d8e-5568-4b55-ac06-95e63aa05fe9 2026-03-08T22:46:49.252 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:46:49.271 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQBZ/K1pnF0tEBAAA+CigGqbKeaUKVT1+QOiuw== 2026-03-08T22:46:49.272 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQBZ/K1pnF0tEBAAA+CigGqbKeaUKVT1+QOiuw=="}' 2026-03-08T22:46:49.272 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new ebcb7d8e-5568-4b55-ac06-95e63aa05fe9 -i td/mon-stretched-cluster-uneven-weight/1/new.json 2026-03-08T22:46:49.516 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:46:49.527 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretched-cluster-uneven-weight/1/new.json 2026-03-08T22:46:49.528 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 1 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/1 --osd-journal=td/mon-stretched-cluster-uneven-weight/1/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 
'--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQBZ/K1pnF0tEBAAA+CigGqbKeaUKVT1+QOiuw== --osd-uuid ebcb7d8e-5568-4b55-ac06-95e63aa05fe9 2026-03-08T22:46:49.549 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:49.549+0000 7fc0bba09780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:49.551 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:49.551+0000 7fc0bba09780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:49.553 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:49.553+0000 7fc0bba09780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:49.553 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:49.553+0000 7fc0bba09780 -1 bdev(0x564e07565c00 td/mon-stretched-cluster-uneven-weight/1/block) open stat got: (1) Operation not permitted 2026-03-08T22:46:49.553 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:49.553+0000 7fc0bba09780 -1 bluestore(td/mon-stretched-cluster-uneven-weight/1) _read_fsid unparsable uuid 2026-03-08T22:46:52.202 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretched-cluster-uneven-weight/1/keyring 2026-03-08T22:46:52.202 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:46:52.203 INFO:tasks.workunit.client.0.vm00.stdout:adding osd1 key to auth repository 2026-03-08T22:46:52.204 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd1 key to auth repository 2026-03-08T22:46:52.204 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretched-cluster-uneven-weight/1/keyring auth add osd.1 osd 
'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:46:55.512 INFO:tasks.workunit.client.0.vm00.stdout:start osd.1 2026-03-08T22:46:55.512 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.1 2026-03-08T22:46:55.512 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 1 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/1 --osd-journal=td/mon-stretched-cluster-uneven-weight/1/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:46:55.512 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:46:55.514 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:46:55.517 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:46:55.531 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:55.531+0000 7f2139e14780 -1 WARNING: all dangerous and experimental features are enabled. 
2026-03-08T22:46:55.539 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:55.539+0000 7f2139e14780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:55.542 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:55.541+0000 7f2139e14780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 1 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=1 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:55.767 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up' 2026-03-08T22:46:56.006 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:56.876 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:56.875+0000 7f2139e14780 -1 Falling back to public interface 2026-03-08T22:46:57.007 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:57.007 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:57.007 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:46:57.007 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:46:57.008 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:57.008 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up' 2026-03-08T22:46:57.240 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:57.987 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:57.987+0000 7f2139e14780 -1 osd.1 0 log_to_monitors true 2026-03-08T22:46:58.242 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:46:58.243 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:58.243 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:58.243 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: 
wait_for_osd: echo 2 2026-03-08T22:46:58.243 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:58.243 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up' 2026-03-08T22:46:58.531 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:46:59.141 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:46:59.141+0000 7f2134f80640 -1 osd.1 0 waiting for initial osdmap 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:46:59.534 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.1 up' 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stdout:osd.1 up in weight 1 up_from 10 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6810/1599414019,v1:127.0.0.1:6811/1599414019] [v2:127.0.0.1:6812/1599414019,v1:127.0.0.1:6813/1599414019] exists,up ebcb7d8e-5568-4b55-ac06-95e63aa05fe9 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: 
wait_for_osd: status=0 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:60: TEST_stretched_cluster_uneven_weight: run_osd td/mon-stretched-cluster-uneven-weight 2 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=2 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretched-cluster-uneven-weight/2 2026-03-08T22:46:59.760 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none 
--mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretched-cluster-uneven-weight/2' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretched-cluster-uneven-weight/2/journal' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretched-cluster-uneven-weight' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:46:59.761 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n '' ']' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:46:59.761 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:46:59.762 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:59.762 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretched-cluster-uneven-weight/$name.log' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' 
--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:46:59.763 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretched-cluster-uneven-weight/2 2026-03-08T22:46:59.764 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:46:59.765 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=c01061bf-7f37-4c6d-b21d-aa067e4413ab 2026-03-08T22:46:59.765 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd2 c01061bf-7f37-4c6d-b21d-aa067e4413ab' 2026-03-08T22:46:59.765 INFO:tasks.workunit.client.0.vm00.stdout:add 
osd2 c01061bf-7f37-4c6d-b21d-aa067e4413ab 2026-03-08T22:46:59.765 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:46:59.778 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQBj/K1pvexjLhAAJem85WtMmR1xCnUsAyQ3iw== 2026-03-08T22:46:59.778 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQBj/K1pvexjLhAAJem85WtMmR1xCnUsAyQ3iw=="}' 2026-03-08T22:46:59.778 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new c01061bf-7f37-4c6d-b21d-aa067e4413ab -i td/mon-stretched-cluster-uneven-weight/2/new.json 2026-03-08T22:47:00.040 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:47:00.062 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretched-cluster-uneven-weight/2/new.json 2026-03-08T22:47:00.063 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 2 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/2 --osd-journal=td/mon-stretched-cluster-uneven-weight/2/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 
--osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQBj/K1pvexjLhAAJem85WtMmR1xCnUsAyQ3iw== --osd-uuid c01061bf-7f37-4c6d-b21d-aa067e4413ab 2026-03-08T22:47:00.083 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:00.083+0000 7fac0a1a1780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:00.085 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:00.085+0000 7fac0a1a1780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:00.086 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:00.086+0000 7fac0a1a1780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:00.087 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:00.087+0000 7fac0a1a1780 -1 bdev(0x55d0474b3c00 td/mon-stretched-cluster-uneven-weight/2/block) open stat got: (1) Operation not permitted 2026-03-08T22:47:00.087 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:00.087+0000 7fac0a1a1780 -1 bluestore(td/mon-stretched-cluster-uneven-weight/2) _read_fsid unparsable uuid 2026-03-08T22:47:02.260 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretched-cluster-uneven-weight/2/keyring 2026-03-08T22:47:02.260 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:47:02.261 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd2 key to auth repository 2026-03-08T22:47:02.261 INFO:tasks.workunit.client.0.vm00.stdout:adding osd2 key to auth repository 2026-03-08T22:47:02.261 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i 
td/mon-stretched-cluster-uneven-weight/2/keyring auth add osd.2 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:47:02.576 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.2 2026-03-08T22:47:02.576 INFO:tasks.workunit.client.0.vm00.stdout:start osd.2 2026-03-08T22:47:02.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 2 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/2 --osd-journal=td/mon-stretched-cluster-uneven-weight/2/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:47:02.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:47:02.578 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:47:02.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:47:02.603 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:02.595+0000 7f529ae14780 -1 WARNING: all dangerous and experimental features are 
enabled. 2026-03-08T22:47:02.604 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:02.604+0000 7f529ae14780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:02.606 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:02.605+0000 7f529ae14780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 2 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=2 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:47:02.816 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:47:02.817 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:47:02.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:47:02.817 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up' 2026-03-08T22:47:03.082 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:47:03.943 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:03.943+0000 7f529ae14780 -1 Falling back to public interface 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stdout:1 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:47:04.085 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up' 2026-03-08T22:47:04.326 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:47:04.795 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:04.795+0000 7f529ae14780 -1 osd.2 0 log_to_monitors true 2026-03-08T22:47:05.328 INFO:tasks.workunit.client.0.vm00.stdout:2 2026-03-08T22:47:05.328 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:47:05.328 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:47:05.328 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: 
wait_for_osd: echo 2 2026-03-08T22:47:05.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:47:05.329 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up' 2026-03-08T22:47:05.579 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ )) 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.2 up' 2026-03-08T22:47:06.582 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:47:06.851 INFO:tasks.workunit.client.0.vm00.stdout:osd.2 up in weight 1 up_from 15 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6818/2931673971,v1:127.0.0.1:6819/2931673971] [v2:127.0.0.1:6820/2931673971,v1:127.0.0.1:6821/2931673971] exists,up c01061bf-7f37-4c6d-b21d-aa067e4413ab 2026-03-08T22:47:06.851 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0 2026-03-08T22:47:06.851 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break 2026-03-08T22:47:06.851 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0 2026-03-08T22:47:06.851 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:58: TEST_stretched_cluster_uneven_weight: for osd in $(seq 0 $(expr $OSDS - 1)) 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:60: TEST_stretched_cluster_uneven_weight: run_osd td/mon-stretched-cluster-uneven-weight 3 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:633: run_osd: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:634: run_osd: shift 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:635: run_osd: local id=3 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:636: run_osd: shift 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:637: run_osd: local osd_data=td/mon-stretched-cluster-uneven-weight/3 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:639: run_osd: local 'ceph_args=--fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144' 2026-03-08T22:47:06.852 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:640: run_osd: ceph_args+=' --osd-failsafe-full-ratio=.99' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:641: run_osd: ceph_args+=' --osd-journal-size=100' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:642: run_osd: ceph_args+=' --osd-scrub-load-threshold=2000' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:643: run_osd: ceph_args+=' --osd-data=td/mon-stretched-cluster-uneven-weight/3' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:644: run_osd: ceph_args+=' --osd-journal=td/mon-stretched-cluster-uneven-weight/3/journal' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:645: run_osd: ceph_args+=' --chdir=' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:646: run_osd: ceph_args+= 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:647: run_osd: ceph_args+=' --run-dir=td/mon-stretched-cluster-uneven-weight' 2026-03-08T22:47:06.852 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: get_asok_path 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:116: get_asok_path: local name= 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:117: get_asok_path: '[' -n 
'' ']' 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: get_asok_dir 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:47:06.853 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:120: get_asok_path: echo '/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:648: run_osd: ceph_args+=' --admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:649: run_osd: ceph_args+=' --debug-osd=20' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:650: run_osd: ceph_args+=' --debug-ms=1' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:651: run_osd: ceph_args+=' --debug-monc=20' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:652: run_osd: ceph_args+=' --log-file=td/mon-stretched-cluster-uneven-weight/$name.log' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:653: run_osd: ceph_args+=' --pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' 2026-03-08T22:47:06.854 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:654: run_osd: ceph_args+=' --osd-max-object-name-len=460' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:655: run_osd: ceph_args+=' --osd-max-object-namespace-len=64' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:656: run_osd: ceph_args+=' --enable-experimental-unrecoverable-data-corrupting-features=*' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:657: run_osd: ceph_args+=' --osd-mclock-profile=high_recovery_ops' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:658: run_osd: ceph_args+=' ' 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:659: run_osd: ceph_args+= 2026-03-08T22:47:06.854 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:660: run_osd: mkdir -p td/mon-stretched-cluster-uneven-weight/3 2026-03-08T22:47:06.856 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: uuidgen 2026-03-08T22:47:06.856 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:662: run_osd: local uuid=bdb0294a-1b15-4dc2-ba94-8aea079e247f 2026-03-08T22:47:06.856 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:663: run_osd: echo 'add osd3 bdb0294a-1b15-4dc2-ba94-8aea079e247f' 2026-03-08T22:47:06.857 INFO:tasks.workunit.client.0.vm00.stdout:add osd3 bdb0294a-1b15-4dc2-ba94-8aea079e247f 2026-03-08T22:47:06.857 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: ceph-authtool --gen-print-key 2026-03-08T22:47:06.877 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:664: run_osd: OSD_SECRET=AQBq/K1p/hc1NBAAHedKJ7LqSfemVKKzrSLNBQ== 2026-03-08T22:47:06.878 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:665: run_osd: echo '{"cephx_secret": "AQBq/K1p/hc1NBAAHedKJ7LqSfemVKKzrSLNBQ=="}' 2026-03-08T22:47:06.878 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:666: run_osd: ceph osd new bdb0294a-1b15-4dc2-ba94-8aea079e247f -i td/mon-stretched-cluster-uneven-weight/3/new.json 2026-03-08T22:47:07.189 INFO:tasks.workunit.client.0.vm00.stdout:3 2026-03-08T22:47:07.199 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:667: run_osd: rm td/mon-stretched-cluster-uneven-weight/3/new.json 2026-03-08T22:47:07.202 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:668: run_osd: ceph-osd -i 3 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/3 --osd-journal=td/mon-stretched-cluster-uneven-weight/3/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 
'--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops --mkfs --key AQBq/K1p/hc1NBAAHedKJ7LqSfemVKKzrSLNBQ== --osd-uuid bdb0294a-1b15-4dc2-ba94-8aea079e247f 2026-03-08T22:47:07.221 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:07.221+0000 7f2e0261b780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:10.225 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:10.225+0000 7f2e0261b780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:10.227 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:10.226+0000 7f2e0261b780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:10.227 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:10.226+0000 7f2e0261b780 -1 bdev(0x56380edb0c00 td/mon-stretched-cluster-uneven-weight/3/block) open stat got: (1) Operation not permitted 2026-03-08T22:47:10.227 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:10.227+0000 7f2e0261b780 -1 bluestore(td/mon-stretched-cluster-uneven-weight/3) _read_fsid unparsable uuid 2026-03-08T22:47:12.599 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:670: run_osd: local key_fn=td/mon-stretched-cluster-uneven-weight/3/keyring 2026-03-08T22:47:12.599 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:671: run_osd: cat 2026-03-08T22:47:12.600 INFO:tasks.workunit.client.0.vm00.stdout:adding osd3 key to auth repository 2026-03-08T22:47:12.600 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:675: run_osd: echo adding osd3 key to auth repository 2026-03-08T22:47:12.600 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:676: run_osd: ceph -i td/mon-stretched-cluster-uneven-weight/3/keyring auth add osd.3 osd 
'allow *' mon 'allow profile osd' mgr 'allow profile osd' 2026-03-08T22:47:12.892 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:677: run_osd: echo start osd.3 2026-03-08T22:47:12.893 INFO:tasks.workunit.client.0.vm00.stdout:start osd.3 2026-03-08T22:47:12.893 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:678: run_osd: ceph-osd -i 3 --fsid=3362c658-ff13-4ba2-bff3-d87f427a3068 --auth-supported=none --mon-host=127.0.0.1:7139,127.0.0.1:7141,127.0.0.1:7142,127.0.0.1:7143,127.0.0.1:7144 --osd-failsafe-full-ratio=.99 --osd-journal-size=100 --osd-scrub-load-threshold=2000 --osd-data=td/mon-stretched-cluster-uneven-weight/3 --osd-journal=td/mon-stretched-cluster-uneven-weight/3/journal --chdir= --run-dir=td/mon-stretched-cluster-uneven-weight '--admin-socket=/tmp/ceph-asok.68828/$cluster-$name.asok' --debug-osd=20 --debug-ms=1 --debug-monc=20 '--log-file=td/mon-stretched-cluster-uneven-weight/$name.log' '--pid-file=td/mon-stretched-cluster-uneven-weight/$name.pid' --osd-max-object-name-len=460 --osd-max-object-namespace-len=64 '--enable-experimental-unrecoverable-data-corrupting-features=*' --osd-mclock-profile=high_recovery_ops 2026-03-08T22:47:12.893 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: jq '.flags_set[]' 2026-03-08T22:47:12.894 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: ceph osd dump --format=json 2026-03-08T22:47:12.896 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:681: run_osd: grep -q '"noup"' 2026-03-08T22:47:12.912 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:12.911+0000 7f262340d780 -1 WARNING: all dangerous and experimental features are enabled. 
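
The trace above repeats the same `run_osd` sequence for each OSD (ceph-helpers.sh:662-678): allocate an id with `ceph osd new`, mkfs the data directory, register the key, then start the daemon. A condensed sketch of that flow, assuming a reachable cluster; the long `$ceph_args` option list from the log is omitted for brevity, and `provision_and_start_osd` is an illustrative name, not a ceph-helpers.sh function:

```shell
# Condensed sketch of the run_osd flow seen in the trace
# (ceph-helpers.sh:662-678). Assumes ceph/ceph-osd/ceph-authtool are
# on PATH and the monitors are reachable; the full --osd-data/--debug-*
# argument list from the log is elided here.
provision_and_start_osd() {
    local dir=$1 id=$2
    local osd_data=$dir/$id
    local uuid secret
    uuid=$(uuidgen)
    secret=$(ceph-authtool --gen-print-key)

    mkdir -p "$osd_data"
    # Hand the mons the cephx secret and claim an osd id for this uuid.
    echo "{\"cephx_secret\": \"$secret\"}" > "$osd_data/new.json"
    ceph osd new "$uuid" -i "$osd_data/new.json"
    rm "$osd_data/new.json"

    # Format the data dir, register the key, then start the daemon.
    ceph-osd -i "$id" --osd-data "$osd_data" --mkfs \
        --key "$secret" --osd-uuid "$uuid"
    ceph -i "$osd_data/keyring" auth add "osd.$id" \
        osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd'
    ceph-osd -i "$id" --osd-data "$osd_data"
}
```

The helper then confirms `noup` is not set before waiting for the OSD to report up, exactly as line 681 of the trace shows.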
2026-03-08T22:47:12.915 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:12.915+0000 7f262340d780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:12.918 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:12.917+0000 7f262340d780 -1 WARNING: all dangerous and experimental features are enabled. 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:684: run_osd: wait_for_osd up 3 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:978: wait_for_osd: local state=up 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:979: wait_for_osd: local id=3 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:981: wait_for_osd: status=1 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i=0 )) 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 )) 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 0 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stdout:0 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump 2026-03-08T22:47:13.166 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up' 2026-03-08T22:47:13.418 
INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:47:13.726 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:13.726+0000 7f262340d780 -1 Falling back to public interface
2026-03-08T22:47:14.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:47:14.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:47:14.420 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 1
2026-03-08T22:47:14.420 INFO:tasks.workunit.client.0.vm00.stdout:1
2026-03-08T22:47:14.421 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:47:14.421 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up'
2026-03-08T22:47:14.655 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:47:14.823 INFO:tasks.workunit.client.0.vm00.stderr:2026-03-08T22:47:14.822+0000 7f262340d780 -1 osd.3 0 log_to_monitors true
2026-03-08T22:47:15.656 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:47:15.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:47:15.657 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 2
2026-03-08T22:47:15.657 INFO:tasks.workunit.client.0.vm00.stdout:2
2026-03-08T22:47:15.658 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:47:15.658 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up'
2026-03-08T22:47:15.887 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:985: wait_for_osd: sleep 1
2026-03-08T22:47:16.888 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i++ ))
2026-03-08T22:47:16.888 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:982: wait_for_osd: (( i < 300 ))
2026-03-08T22:47:16.889 INFO:tasks.workunit.client.0.vm00.stdout:3
2026-03-08T22:47:16.889 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:983: wait_for_osd: echo 3
2026-03-08T22:47:16.889 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: ceph osd dump
2026-03-08T22:47:16.889 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:984: wait_for_osd: grep 'osd.3 up'
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stdout:osd.3 up in weight 1 up_from 20 up_thru 0 down_at 0 last_clean_interval [0,0) [v2:127.0.0.1:6826/2026783391,v1:127.0.0.1:6827/2026783391] [v2:127.0.0.1:6828/2026783391,v1:127.0.0.1:6829/2026783391] exists,up bdb0294a-1b15-4dc2-ba94-8aea079e247f
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:987: wait_for_osd: status=0
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:988: wait_for_osd: break
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:991: wait_for_osd: return 0
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:63: TEST_stretched_cluster_uneven_weight: for zone in iris pze
2026-03-08T22:47:17.142 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:65: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket iris zone
2026-03-08T22:47:17.401 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'iris' already exists
2026-03-08T22:47:17.409 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:66: TEST_stretched_cluster_uneven_weight: ceph osd crush move iris root=default
2026-03-08T22:47:17.714 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -5 name 'iris' to location {root=default} in crush map
2026-03-08T22:47:17.724 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:63: TEST_stretched_cluster_uneven_weight: for zone in iris pze
2026-03-08T22:47:17.724 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:65: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket pze zone
2026-03-08T22:47:18.039 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'pze' already exists
2026-03-08T22:47:18.049 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:66: TEST_stretched_cluster_uneven_weight: ceph osd crush move pze root=default
2026-03-08T22:47:18.334 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -7 name 'pze' to location {root=default} in crush map
2026-03-08T22:47:18.344 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:69: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket node-2 host
2026-03-08T22:47:18.640 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-2' already exists
2026-03-08T22:47:18.649 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:70: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket node-3 host
2026-03-08T22:47:18.951 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-3' already exists
2026-03-08T22:47:18.962 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:71: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket node-4 host
2026-03-08T22:47:22.254 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-4' already exists
2026-03-08T22:47:22.265 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:72: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket node-5 host
2026-03-08T22:47:22.549 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'node-5' already exists
2026-03-08T22:47:22.560 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:74: TEST_stretched_cluster_uneven_weight: ceph osd crush move node-2 zone=iris
2026-03-08T22:47:22.840 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -9 name 'node-2' to location {zone=iris} in crush map
2026-03-08T22:47:22.850 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:75: TEST_stretched_cluster_uneven_weight: ceph osd crush move node-3 zone=iris
2026-03-08T22:47:23.135 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -10 name 'node-3' to location {zone=iris} in crush map
2026-03-08T22:47:23.143 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:76: TEST_stretched_cluster_uneven_weight: ceph osd crush move node-4 zone=pze
2026-03-08T22:47:23.421 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -11 name 'node-4' to location {zone=pze} in crush map
2026-03-08T22:47:23.430 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:77: TEST_stretched_cluster_uneven_weight: ceph osd crush move node-5 zone=pze
2026-03-08T22:47:23.709 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -12 name 'node-5' to location {zone=pze} in crush map
2026-03-08T22:47:23.718 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:79: TEST_stretched_cluster_uneven_weight: ceph osd crush move osd.0 host=node-2
2026-03-08T22:47:23.997 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 0 name 'osd.0' to location {host=node-2} in crush map
2026-03-08T22:47:24.007 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:80: TEST_stretched_cluster_uneven_weight: ceph osd crush move osd.1 host=node-3
2026-03-08T22:47:24.295 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 1 name 'osd.1' to location {host=node-3} in crush map
2026-03-08T22:47:24.304 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:81: TEST_stretched_cluster_uneven_weight: ceph osd crush move osd.2 host=node-4
2026-03-08T22:47:24.593 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 2 name 'osd.2' to location {host=node-4} in crush map
2026-03-08T22:47:24.603 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:82: TEST_stretched_cluster_uneven_weight: ceph osd crush move osd.3 host=node-5
2026-03-08T22:47:24.895 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id 3 name 'osd.3' to location {host=node-5} in crush map
2026-03-08T22:47:24.905 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:84: TEST_stretched_cluster_uneven_weight: ceph mon set_location a zone=iris host=node-2
2026-03-08T22:47:30.211 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:85: TEST_stretched_cluster_uneven_weight: ceph mon set_location b zone=iris host=node-3
2026-03-08T22:47:30.599 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:86: TEST_stretched_cluster_uneven_weight: ceph mon set_location c zone=pze host=node-4
2026-03-08T22:47:41.933 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:87: TEST_stretched_cluster_uneven_weight: ceph mon set_location d zone=pze host=node-5
2026-03-08T22:47:42.309 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:89: TEST_stretched_cluster_uneven_weight: hostname -s
2026-03-08T22:47:42.310 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:89: TEST_stretched_cluster_uneven_weight: hostname=vm00
2026-03-08T22:47:42.310 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:90: TEST_stretched_cluster_uneven_weight: ceph osd crush remove vm00
2026-03-08T22:47:42.600 INFO:tasks.workunit.client.0.vm00.stderr:device 'vm00' does not appear in the crush map
2026-03-08T22:47:42.610 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:91: TEST_stretched_cluster_uneven_weight: ceph osd getcrushmap
2026-03-08T22:47:42.843 INFO:tasks.workunit.client.0.vm00.stderr:26
2026-03-08T22:47:42.852 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:92: TEST_stretched_cluster_uneven_weight: crushtool --decompile crushmap
2026-03-08T22:47:42.864 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:93: TEST_stretched_cluster_uneven_weight: sed 's/^# end crush map$//' crushmap.txt
2026-03-08T22:47:42.865 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:94: TEST_stretched_cluster_uneven_weight: cat
2026-03-08T22:47:42.866 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:110: TEST_stretched_cluster_uneven_weight: crushtool --compile crushmap_modified.txt -o crushmap.bin
2026-03-08T22:47:42.876 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: min_size is no longer supported, ignoring
2026-03-08T22:47:42.876 INFO:tasks.workunit.client.0.vm00.stderr:WARNING: max_size is no longer supported, ignoring
2026-03-08T22:47:42.877 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:111: TEST_stretched_cluster_uneven_weight: ceph osd setcrushmap -i crushmap.bin
2026-03-08T22:47:43.222 INFO:tasks.workunit.client.0.vm00.stderr:28
2026-03-08T22:47:43.235 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:112: TEST_stretched_cluster_uneven_weight: local stretched_poolname=stretched_rbdpool
2026-03-08T22:47:43.236 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:113: TEST_stretched_cluster_uneven_weight: ceph osd pool create stretched_rbdpool 32 32 stretch_rule
2026-03-08T22:47:43.526 INFO:tasks.workunit.client.0.vm00.stderr:pool 'stretched_rbdpool' already exists
2026-03-08T22:47:43.536 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:114: TEST_stretched_cluster_uneven_weight: ceph osd pool set stretched_rbdpool size 4
2026-03-08T22:47:43.925 INFO:tasks.workunit.client.0.vm00.stderr:set pool 1 size to 4
2026-03-08T22:47:43.938 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:116: TEST_stretched_cluster_uneven_weight: ceph mon set_location e zone=arbiter host=node-1
2026-03-08T22:47:49.213 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:117: TEST_stretched_cluster_uneven_weight: ceph mon enable_stretch_mode e stretch_rule zone
2026-03-08T22:47:49.671 INFO:tasks.workunit.client.0.vm00.stderr:Second attempt of previously successful command failed with EINVAL: stretch mode is already engaged
2026-03-08T22:47:49.671 INFO:tasks.workunit.client.0.vm00.stderr:stretch mode is already engaged
2026-03-08T22:47:49.681 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:120: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.0 0.09000
2026-03-08T22:47:50.024 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 0 name 'osd.0' to 0.09 in crush map
2026-03-08T22:47:50.042 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:121: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.1 0.09000
2026-03-08T22:47:50.361 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 1 name 'osd.1' to 0.09 in crush map
2026-03-08T22:47:50.377 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:122: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.2 0.09000
2026-03-08T22:47:50.704 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 2 name 'osd.2' to 0.09 in crush map
2026-03-08T22:47:50.716 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:123: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.3 0.09000
2026-03-08T22:47:51.011 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 3 name 'osd.3' to 0.09 in crush map
2026-03-08T22:47:51.038 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:126: TEST_stretched_cluster_uneven_weight: ceph osd crush add-bucket sham zone
2026-03-08T22:47:51.327 INFO:tasks.workunit.client.0.vm00.stderr:bucket 'sham' already exists
2026-03-08T22:47:51.337 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:127: TEST_stretched_cluster_uneven_weight: ceph osd crush move sham root=default
2026-03-08T22:47:54.625 INFO:tasks.workunit.client.0.vm00.stderr:no need to move item id -3 name 'sham' to location {root=default} in crush map
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:128: TEST_stretched_cluster_uneven_weight: wait_for_health INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1838: wait_for_health: local grepstr=INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: get_timeout_delays 300 .1
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: shopt -q -o xtrace
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: echo true
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: local trace=true
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: true
2026-03-08T22:47:54.634 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: shopt -u -o xtrace
2026-03-08T22:47:54.791 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: delays=('0.1' '0.2' '0.4' '0.8' '1.6' '3.2' '6.4' '12.8' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '4.5')
2026-03-08T22:47:54.791 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: local -a delays
2026-03-08T22:47:54.791 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1840: wait_for_health: local -i loop=0
2026-03-08T22:47:54.791 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1842: wait_for_health: ceph health detail
2026-03-08T22:47:54.791 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1842: wait_for_health: grep INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:55.103 INFO:tasks.workunit.client.0.vm00.stdout:[WRN] INCORRECT_NUM_BUCKETS_STRETCH_MODE: Stretch mode buckets != 2
2026-03-08T22:47:55.103 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:130: TEST_stretched_cluster_uneven_weight: ceph osd crush rm sham
2026-03-08T22:47:55.411 INFO:tasks.workunit.client.0.vm00.stderr:device 'sham' does not appear in the crush map
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:131: TEST_stretched_cluster_uneven_weight: wait_for_health_gone INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1816: wait_for_health_gone: local grepstr=INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: get_timeout_delays 300 .1
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: shopt -q -o xtrace
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: echo true
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: local trace=true
2026-03-08T22:47:55.421 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: true
2026-03-08T22:47:55.422 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: shopt -u -o xtrace
2026-03-08T22:47:55.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: delays=('0.1' '0.2' '0.4' '0.8' '1.6' '3.2' '6.4' '12.8' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '4.5')
2026-03-08T22:47:55.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: local -a delays
2026-03-08T22:47:55.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1818: wait_for_health_gone: local -i loop=0
2026-03-08T22:47:55.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1820: wait_for_health_gone: ceph health detail
2026-03-08T22:47:55.577 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1820: wait_for_health_gone: grep INCORRECT_NUM_BUCKETS_STRETCH_MODE
2026-03-08T22:47:58.869 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:135: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.0 0.07000
2026-03-08T22:48:02.243 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 0 name 'osd.0' to 0.07 in crush map
2026-03-08T22:48:02.260 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:137: TEST_stretched_cluster_uneven_weight: wait_for_health UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:02.260 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1838: wait_for_health: local grepstr=UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:02.260 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: get_timeout_delays 300 .1
2026-03-08T22:48:02.260 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: shopt -q -o xtrace
2026-03-08T22:48:02.260 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: echo true
2026-03-08T22:48:02.261 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: local trace=true
2026-03-08T22:48:02.261 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: true
2026-03-08T22:48:02.261 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: shopt -u -o xtrace
2026-03-08T22:48:02.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: delays=('0.1' '0.2' '0.4' '0.8' '1.6' '3.2' '6.4' '12.8' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '4.5')
2026-03-08T22:48:02.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1839: wait_for_health: local -a delays
2026-03-08T22:48:02.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1840: wait_for_health: local -i loop=0
2026-03-08T22:48:02.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1842: wait_for_health: ceph health detail
2026-03-08T22:48:02.423 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1842: wait_for_health: grep UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:02.714 INFO:tasks.workunit.client.0.vm00.stdout:[WRN] UNEVEN_WEIGHTS_STRETCH_MODE: Stretch mode buckets have different weights!
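The `get_timeout_delays 300 .1` call traced above expands into the delay list logged right after it: delays double from 0.1 s, are capped at 15 s, and the final step is trimmed so the series sums to the 300 s timeout (hence the trailing 4.5). A minimal sketch of that schedule — an illustration only, not the ceph-helpers.sh source; `get_timeout_delays_sketch` is a hypothetical name:

```shell
#!/usr/bin/env bash
# Sketch of the backoff schedule seen in the trace (assumed behaviour, not
# the real helper): delays double from $2 seconds, are capped at 15s, and
# the last delay is trimmed so the whole series sums to the $1 timeout.
get_timeout_delays_sketch() {
    local timeout=$1 first=$2
    awk -v t="$timeout" -v d="$first" 'BEGIN {
        m = 15; total = 0; sep = ""
        while (1) {
            if (d > m) d = m                  # cap each delay at 15s
            if (total + d >= t) {             # last step: trim to the timeout
                printf "%s%g\n", sep, t - total
                break
            }
            printf "%s%g", sep, d; sep = " "
            total += d; d *= 2
        }
    }'
}
```

With `300 .1` this reproduces the 27-element list in the log: eight doubling steps (0.1 through 12.8), eighteen 15 s steps, and a final 4.5 s remainder.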
2026-03-08T22:48:02.714 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:139: TEST_stretched_cluster_uneven_weight: ceph osd crush reweight osd.0 0.09000
2026-03-08T22:48:03.012 INFO:tasks.workunit.client.0.vm00.stderr:reweighted item id 0 name 'osd.0' to 0.09 in crush map
2026-03-08T22:48:03.026 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:141: TEST_stretched_cluster_uneven_weight: wait_for_health_gone UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:03.026 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1816: wait_for_health_gone: local grepstr=UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:03.026 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: get_timeout_delays 300 .1
2026-03-08T22:48:03.026 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: shopt -q -o xtrace
2026-03-08T22:48:03.026 INFO:tasks.workunit.client.0.vm00.stderr:///home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: echo true
2026-03-08T22:48:03.027 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1602: get_timeout_delays: local trace=true
2026-03-08T22:48:03.027 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: true
2026-03-08T22:48:03.027 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1603: get_timeout_delays: shopt -u -o xtrace
2026-03-08T22:48:03.185 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: delays=('0.1' '0.2' '0.4' '0.8' '1.6' '3.2' '6.4' '12.8' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '15' '4.5')
2026-03-08T22:48:03.185 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1817: wait_for_health_gone: local -a delays
2026-03-08T22:48:03.185 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1818: wait_for_health_gone: local -i loop=0
2026-03-08T22:48:03.185 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1820: wait_for_health_gone: ceph health detail
2026-03-08T22:48:03.185 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:1820: wait_for_health_gone: grep UNEVEN_WEIGHTS_STRETCH_MODE
2026-03-08T22:48:03.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:143: TEST_stretched_cluster_uneven_weight: teardown td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs=
2026-03-08T22:48:03.477 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretched-cluster-uneven-weight KILL
2026-03-08T22:48:03.478 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:48:03.478 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:48:03.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:48:03.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:48:03.478 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:48:03.639 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:48:03.639 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname
2026-03-08T22:48:03.639 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']'
2026-03-08T22:48:03.640 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T .
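The teardown trace that follows probes `kernel.core_pattern` to decide whether core files may have been left behind: a pattern piped to a handler (leading `|`) is skipped, and otherwise a pattern that starts or ends with `core` triggers a listing of the dump directory. A rough sketch of that check — the helper name `cores_present` and its pattern argument are invented here so the logic can run without touching the live sysctl:

```shell
#!/usr/bin/env bash
# Hypothetical re-statement of the core-file check walked through in the
# teardown trace (paraphrased, not copied from ceph-helpers.sh).
cores_present() {
    local pattern=$1 dir
    [ "${pattern:0:1}" = "|" ] && return 1    # piped to a handler: nothing lands on disk
    case "$pattern" in
        core*|*core)                          # mirrors the traced: grep -q '^core\|core$'
            dir=$(dirname "$pattern")
            # cores exist iff the dump directory is non-empty
            [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ] && return 0
            ;;
    esac
    return 1
}
```

In the log the pattern is `/home/ubuntu/cephtest/archive/coredump/%t.%p.core`, the `|` test fails, and the `ls` of the coredump directory comes back empty, so teardown proceeds with `cores=no`.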
2026-03-08T22:48:03.641 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']'
2026-03-08T22:48:03.641 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no
2026-03-08T22:48:03.641 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern
2026-03-08T22:48:03.641 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:48:03.641 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']'
2026-03-08T22:48:03.642 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$'
2026-03-08T22:48:03.642 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:48:03.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump
2026-03-08T22:48:03.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']'
2026-03-08T22:48:03.643 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.684 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir
2026-03-08T22:48:03.684 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']'
2026-03-08T22:48:03.684 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828
2026-03-08T22:48:03.684 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.68828
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']'
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/mon-stretch/mon-stretch-uneven-crush-weights.sh:23: run: teardown td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs=
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretched-cluster-uneven-weight KILL
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true
2026-03-08T22:48:03.689 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace
2026-03-08T22:48:03.691 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0
2026-03-08T22:48:03.691 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname
2026-03-08T22:48:03.692 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']'
2026-03-08T22:48:03.692 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T .
2026-03-08T22:48:03.693 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']'
2026-03-08T22:48:03.693 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no
2026-03-08T22:48:03.693 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern
2026-03-08T22:48:03.694 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:48:03.694 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']'
2026-03-08T22:48:03.694 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$'
2026-03-08T22:48:03.695 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-08T22:48:03.695 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump
2026-03-08T22:48:03.696 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o '' = 1 ']'
2026-03-08T22:48:03.696 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretched-cluster-uneven-weight
2026-03-08T22:48:03.697 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir
2026-03-08T22:48:03.697
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:48:03.697 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:48:03.698 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.68828 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2377: main: code=0 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2381: main: teardown td/mon-stretched-cluster-uneven-weight 0 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:164: teardown: local dir=td/mon-stretched-cluster-uneven-weight 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:165: teardown: local dumplogs=0 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:166: teardown: kill_daemons td/mon-stretched-cluster-uneven-weight KILL 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: shopt -q -o xtrace 2026-03-08T22:48:03.699 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: echo true 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:345: kill_daemons: local trace=true 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: true 2026-03-08T22:48:03.699 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:346: kill_daemons: shopt -u -o xtrace 2026-03-08T22:48:03.700 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:362: kill_daemons: return 0 2026-03-08T22:48:03.701 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: uname 2026-03-08T22:48:03.701 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:167: teardown: '[' Linux '!=' FreeBSD ']' 2026-03-08T22:48:03.702 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: stat -f -c %T . 
2026-03-08T22:48:03.703 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:168: teardown: '[' xfs == btrfs ']' 2026-03-08T22:48:03.703 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:171: teardown: local cores=no 2026-03-08T22:48:03.703 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: sysctl -n kernel.core_pattern 2026-03-08T22:48:03.704 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:172: teardown: local pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:48:03.704 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:174: teardown: '[' / = '|' ']' 2026-03-08T22:48:03.704 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: grep -q '^core\|core$' 2026-03-08T22:48:03.704 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: dirname /home/ubuntu/cephtest/archive/coredump/%t.%p.core 2026-03-08T22:48:03.705 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:180: teardown: ls /home/ubuntu/cephtest/archive/coredump 2026-03-08T22:48:03.706 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:189: teardown: '[' no = yes -o 0 = 1 ']' 2026-03-08T22:48:03.706 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:198: teardown: rm -fr td/mon-stretched-cluster-uneven-weight 2026-03-08T22:48:03.707 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: get_asok_dir 2026-03-08T22:48:03.707 
INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:108: get_asok_dir: '[' -n '' ']' 2026-03-08T22:48:03.707 INFO:tasks.workunit.client.0.vm00.stderr://home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:111: get_asok_dir: echo /tmp/ceph-asok.68828 2026-03-08T22:48:03.707 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:199: teardown: rm -rf /tmp/ceph-asok.68828 2026-03-08T22:48:03.708 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:200: teardown: '[' no = yes ']' 2026-03-08T22:48:03.708 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:207: teardown: return 0 2026-03-08T22:48:03.708 INFO:tasks.workunit.client.0.vm00.stderr:/home/ubuntu/cephtest/clone.client.0/qa/standalone/ceph-helpers.sh:2382: main: return 0 2026-03-08T22:48:03.708 INFO:teuthology.orchestra.run:Running command with timeout 3600 2026-03-08T22:48:03.708 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp 2026-03-08T22:48:03.774 INFO:tasks.workunit:Stopping ['mon-stretch'] on client.0... 
2026-03-08T22:48:03.774 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 2026-03-08T22:48:04.179 DEBUG:teuthology.parallel:result is None 2026-03-08T22:48:04.180 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0 2026-03-08T22:48:04.203 INFO:tasks.workunit:Deleted dir /home/ubuntu/cephtest/mnt.0/client.0 2026-03-08T22:48:04.203 DEBUG:teuthology.orchestra.run.vm00:> rmdir -- /home/ubuntu/cephtest/mnt.0 2026-03-08T22:48:04.259 INFO:tasks.workunit:Deleted artificial mount point /home/ubuntu/cephtest/mnt.0/client.0 2026-03-08T22:48:04.259 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-08T22:48:04.261 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-08T22:48:04.261 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-08T22:48:04.330 INFO:teuthology.task.install.rpm:Removing packages: ceph-radosgw, ceph-test, ceph, ceph-base, cephadm, ceph-immutable-object-cache, ceph-mgr, ceph-mgr-dashboard, ceph-mgr-diskprediction-local, ceph-mgr-rook, ceph-mgr-cephadm, ceph-fuse, ceph-volume, librados-devel, libcephfs2, libcephfs-devel, librados2, librbd1, python3-rados, python3-rgw, python3-cephfs, python3-rbd, rbd-fuse, rbd-mirror, rbd-nbd on rpm system. 
2026-03-08T22:48:04.331 DEBUG:teuthology.orchestra.run.vm00:> 2026-03-08T22:48:04.331 DEBUG:teuthology.orchestra.run.vm00:> for d in ceph-radosgw ceph-test ceph ceph-base cephadm ceph-immutable-object-cache ceph-mgr ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-rook ceph-mgr-cephadm ceph-fuse ceph-volume librados-devel libcephfs2 libcephfs-devel librados2 librbd1 python3-rados python3-rgw python3-cephfs python3-rbd rbd-fuse rbd-mirror rbd-nbd ; do 2026-03-08T22:48:04.331 DEBUG:teuthology.orchestra.run.vm00:> sudo yum -y remove $d || true 2026-03-08T22:48:04.331 DEBUG:teuthology.orchestra.run.vm00:> done 2026-03-08T22:48:04.528 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 39 M 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout: mailcap noarch 2.1.49-5.el9 @baseos 78 k 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.529 
INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 39 M 2026-03-08T22:48:04.529 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-08T22:48:04.531 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-08T22:48:04.531 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-08T22:48:04.544 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-03-08T22:48:04.544 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-08T22:48:04.613 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-radosgw@*.service" escaped as "ceph-radosgw@\x2a.service". 2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-radosgw.target". 2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-radosgw.target". 
2026-03-08T22:48:04.633 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.637 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-08T22:48:04.645 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-08T22:48:04.659 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-08T22:48:04.723 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: mailcap-2.1.49-5.el9.noarch 2/2 2026-03-08T22:48:04.724 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2 2026-03-08T22:48:04.778 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : mailcap-2.1.49-5.el9.noarch 2/2 2026-03-08T22:48:04.779 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.779 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-03-08T22:48:04.779 INFO:teuthology.orchestra.run.vm00.stdout: ceph-radosgw-2:19.2.3-678.ge911bdeb.el9.x86_64 mailcap-2.1.49-5.el9.noarch 2026-03-08T22:48:04.779 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.779 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 210 M 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: libxslt x86_64 1.1.34-12.el9 @appstream 743 k 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: socat x86_64 1.7.4.1-8.el9 @appstream 1.1 M 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: xmlstarlet x86_64 1.6.1-20.el9 @appstream 195 k 2026-03-08T22:48:04.974 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout:Remove 4 Packages 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 212 M 2026-03-08T22:48:04.975 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-08T22:48:04.977 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-08T22:48:04.977 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-08T22:48:04.999 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 
2026-03-08T22:48:04.999 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-08T22:48:05.062 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-08T22:48:05.070 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-08T22:48:05.072 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : xmlstarlet-1.6.1-20.el9.x86_64 2/4 2026-03-08T22:48:05.075 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libxslt-1.1.34-12.el9.x86_64 3/4 2026-03-08T22:48:05.091 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-08T22:48:05.156 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: socat-1.7.4.1-8.el9.x86_64 4/4 2026-03-08T22:48:05.156 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 1/4 2026-03-08T22:48:05.156 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libxslt-1.1.34-12.el9.x86_64 2/4 2026-03-08T22:48:05.156 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : socat-1.7.4.1-8.el9.x86_64 3/4 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : xmlstarlet-1.6.1-20.el9.x86_64 4/4 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout: ceph-test-2:19.2.3-678.ge911bdeb.el9.x86_64 libxslt-1.1.34-12.el9.x86_64 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout: socat-1.7.4.1-8.el9.x86_64 xmlstarlet-1.6.1-20.el9.x86_64 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.200 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-03-08T22:48:05.409 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: ceph x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 0 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies: 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 7.5 M 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 18 M 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: lua x86_64 5.4.4-4.el9 @appstream 593 k 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel x86_64 5.4.4-4.el9 @crb 49 k 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: luarocks noarch 3.9.2-5.el9 @epel 692 k 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: unzip x86_64 6.0-59.el9 @baseos 389 k 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: zip x86_64 3.0-35.el9 @baseos 724 k 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================ 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Remove 8 Packages 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 28 M 
2026-03-08T22:48:05.410 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-08T22:48:05.412 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 2026-03-08T22:48:05.412 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-08T22:48:05.436 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-03-08T22:48:05.436 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-08T22:48:05.482 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-08T22:48:05.493 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-08T22:48:05.497 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : luarocks-3.9.2-5.el9.noarch 2/8 2026-03-08T22:48:05.506 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-devel-5.4.4-4.el9.x86_64 3/8 2026-03-08T22:48:05.514 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : zip-3.0-35.el9.x86_64 4/8 2026-03-08T22:48:05.522 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : unzip-6.0-59.el9.x86_64 5/8 2026-03-08T22:48:05.526 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lua-5.4.4-4.el9.x86_64 6/8 2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mds@*.service" escaped as "ceph-mds@\x2a.service". 2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mds.target". 2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mds.target". 
2026-03-08T22:48:05.551 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.552 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-08T22:48:05.560 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 7/8 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mon@*.service" escaped as "ceph-mon@\x2a.service". 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mon.target". 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mon.target". 2026-03-08T22:48:05.581 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.582 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 8/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 1/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 3/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-5.4.4-4.el9.x86_64 4/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lua-devel-5.4.4-4.el9.x86_64 5/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
luarocks-3.9.2-5.el9.noarch 6/8 2026-03-08T22:48:05.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : unzip-6.0-59.el9.x86_64 7/8 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : zip-3.0-35.el9.x86_64 8/8 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout:Removed: 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: ceph-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mds-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mon-2:19.2.3-678.ge911bdeb.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: lua-5.4.4-4.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: lua-devel-5.4.4-4.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: luarocks-3.9.2-5.el9.noarch 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: unzip-6.0-59.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: zip-3.0-35.el9.x86_64 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.741 INFO:teuthology.orchestra.run.vm00.stdout:Complete! 2026-03-08T22:48:05.938 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved. 
2026-03-08T22:48:05.943 INFO:teuthology.orchestra.run.vm00.stdout:=========================================================================================== 2026-03-08T22:48:05.943 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size 2026-03-08T22:48:05.943 INFO:teuthology.orchestra.run.vm00.stdout:=========================================================================================== 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout:Removing: 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 23 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages: 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 431 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.4 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 806 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 88 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 66 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 563 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 59 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.4 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused 
dependencies: 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp x86_64 20211102.0-4.el9 @epel 1.9 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 85 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 628 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 1.5 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 52 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 138 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup x86_64 2.8.1-3.el9 @baseos 770 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas x86_64 3.0.4-9.el9 @appstream 68 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib x86_64 3.0.4-9.el9 @appstream 11 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp x86_64 3.0.4-9.el9 @appstream 39 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs x86_64 2.9.1-3.el9 @epel 1.4 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data noarch 1.46.7-10.el9 @epel 13 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs x86_64 1.1.0-3.el9 @baseos 80 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 425 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libconfig x86_64 1.7.2-9.el9 @baseos 220 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran x86_64 11.5.0-14.el9 @baseos 2.8 M 2026-03-08T22:48:05.944 
INFO:teuthology.orchestra.run.vm00.stdout: liboath x86_64 2.6.12-1.el9 @epel 94 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath x86_64 11.5.0-14.el9 @baseos 330 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.6 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt x86_64 1.10.1-1.el9 @appstream 685 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: libunwind x86_64 1.6.2-1.el9 @epel 170 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: openblas x86_64 0.3.29-1.el9 @appstream 112 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp x86_64 0.3.29-1.el9 @appstream 46 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: pciutils x86_64 3.7.0-7.el9 @baseos 216 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: protobuf x86_64 3.14.0-17.el9 @appstream 3.5 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler x86_64 3.14.0-17.el9 @crb 2.9 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh noarch 2.13.2-5.el9 @epel 3.9 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand noarch 2.2.2-8.el9 @epel 82 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel noarch 2.9.1-2.el9 @appstream 27 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile noarch 1.2.0-1.el9 @epel 254 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt x86_64 3.2.2-1.el9 @epel 87 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools noarch 4.2.4-1.el9 @epel 93 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 702 k 2026-03-08T22:48:05.944 
INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi noarch 2023.05.07-4.el9 @epel 6.3 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi x86_64 1.14.5-5.el9 @baseos 1.0 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet noarch 4.0.0-5.el9 @anaconda 1.4 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot noarch 10.0.1-4.el9 @epel 682 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy noarch 18.6.1-2.el9 @epel 1.1 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography x86_64 36.0.1-5.el9 @baseos 4.5 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel x86_64 3.9.25-3.el9 @appstream 765 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth noarch 1:2.45.0-1.el9 @epel 1.4 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio x86_64 1.46.7-10.el9 @epel 6.7 M 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools x86_64 1.46.7-10.el9 @epel 418 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna noarch 2.10-7.el9.1 @anaconda 513 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco noarch 8.2.1-3.el9 @epel 3.7 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes noarch 3.2.1-5.el9 @epel 24 k 2026-03-08T22:48:05.944 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections noarch 3.0.0-8.el9 @epel 55 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context noarch 6.0.1-3.el9 @epel 31 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools noarch 3.5.0-2.el9 @epel 33 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text noarch 4.0.0-2.el9 @epel 51 k 
2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2 noarch 2.11.3-8.el9 @appstream 1.1 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch noarch 1.21-16.el9 @koji-override-0 55 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer noarch 2.0-4.el9 @koji-override-0 34 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes noarch 1:26.1.0-3.el9 @epel 21 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt x86_64 1.10.1-1.el9 @appstream 832 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-logutils noarch 0.3.5-21.el9 @epel 126 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako noarch 1.1.4-6.el9 @appstream 534 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe x86_64 1.1.1-12.el9 @appstream 60 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools noarch 8.12.0-2.el9 @epel 378 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort noarch 7.1.1-5.el9 @epel 215 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy x86_64 1:1.23.5-2.el9 @appstream 30 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py x86_64 1:1.23.5-2.el9 @appstream 1.7 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib noarch 3.1.1-5.el9 @koji-override-0 888 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging noarch 20.9-5.el9 @appstream 248 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan noarch 1.4.2-3.el9 @epel 1.3 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply noarch 3.11-14.el9 @baseos 430 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: 
python3-portend noarch 3.1.0-2.el9 @epel 20 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable noarch 0.7.2-27.el9 @koji-override-0 166 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf noarch 3.14.0-17.el9 @appstream 1.4 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL noarch 21.0.0-1.el9 @epel 389 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1 noarch 0.4.8-7.el9 @appstream 622 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules noarch 0.4.8-7.el9 @appstream 1.0 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser noarch 2.20-6.el9 @baseos 745 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing noarch 2.4.7-9.el9 @baseos 635 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks noarch 1.7.1-12.el9 @anaconda 88 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz noarch 2021.1-5.el9 @koji-override-0 176 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru noarch 0.7-16.el9 @epel 83 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests noarch 2.25.1-10.el9 @baseos 405 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib noarch 1.3.0-12.el9 @appstream 119 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes noarch 2.5.1-5.el9 @epel 459 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa noarch 4.9-2.el9 @epel 202 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy x86_64 1.9.3-2.el9 @appstream 76 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora noarch 5.0.0-2.el9 @epel 96 k 2026-03-08T22:48:05.945 
INFO:teuthology.orchestra.run.vm00.stdout: python3-toml noarch 0.10.2-6.el9 @appstream 99 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions noarch 4.15.0-1.el9 @epel 447 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3 noarch 1.26.5-7.el9 @baseos 746 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob noarch 1.8.8-2.el9 @epel 1.2 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client noarch 1.2.3-2.el9 @epel 319 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-werkzeug noarch 2.0.3-3.el9.1 @epel 1.9 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile noarch 2.0-10.el9 @epel 35 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: qatlib x86_64 25.08.0-2.el9 @appstream 639 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service x86_64 25.08.0-2.el9 @appstream 69 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs x86_64 1.3.1-1.el9 @appstream 148 k 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout:=========================================================================================== 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout:Remove 103 Packages 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 613 M 2026-03-08T22:48:05.945 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check 2026-03-08T22:48:05.974 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded. 
2026-03-08T22:48:05.974 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test 2026-03-08T22:48:06.074 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded. 2026-03-08T22:48:06.074 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction 2026-03-08T22:48:06.257 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1 2026-03-08T22:48:06.257 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/103 2026-03-08T22:48:06.264 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 1/103 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-mgr@*.service" escaped as "ceph-mgr@\x2a.service". 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-mgr.target". 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-mgr.target". 
2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.281 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103 2026-03-08T22:48:06.295 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103 2026-03-08T22:48:06.318 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 3/103 2026-03-08T22:48:06.318 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/103 2026-03-08T22:48:06.370 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 4/103 2026-03-08T22:48:06.379 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-kubernetes-1:26.1.0-3.el9.noarch 5/103 2026-03-08T22:48:06.383 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-oauthlib-1.3.0-12.el9.noarch 6/103 2026-03-08T22:48:06.383 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103 2026-03-08T22:48:06.393 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103 2026-03-08T22:48:06.399 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cherrypy-18.6.1-2.el9.noarch 8/103 2026-03-08T22:48:06.403 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cheroot-10.0.1-4.el9.noarch 9/103 2026-03-08T22:48:06.411 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-tools-1.46.7-10.el9.x86_64 10/103 2026-03-08T22:48:06.414 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-grpcio-1.46.7-10.el9.x86_64 11/103 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not 
supported for this. 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-osd@*.service" escaped as "ceph-osd@\x2a.service". 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-osd.target". 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-osd.target". 2026-03-08T22:48:06.434 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.439 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103 2026-03-08T22:48:06.447 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103 2026-03-08T22:48:06.461 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103 2026-03-08T22:48:06.461 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:06.461 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-volume@*.service" escaped as "ceph-volume@\x2a.service". 
2026-03-08T22:48:06.461 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.468 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103 2026-03-08T22:48:06.476 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 13/103 2026-03-08T22:48:06.478 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-collections-3.0.0-8.el9.noarch 14/103 2026-03-08T22:48:06.483 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-text-4.0.0-2.el9.noarch 15/103 2026-03-08T22:48:06.487 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jinja2-2.11.3-8.el9.noarch 16/103 2026-03-08T22:48:06.495 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-requests-2.25.1-10.el9.noarch 17/103 2026-03-08T22:48:06.506 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-google-auth-1:2.45.0-1.el9.noarch 18/103 2026-03-08T22:48:06.511 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pecan-1.4.2-3.el9.noarch 19/103 2026-03-08T22:48:06.520 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rsa-4.9-2.el9.noarch 20/103 2026-03-08T22:48:06.526 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-modules-0.4.8-7.el9.noarch 21/103 2026-03-08T22:48:06.553 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-urllib3-1.26.5-7.el9.noarch 22/103 2026-03-08T22:48:06.560 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-babel-2.9.1-2.el9.noarch 23/103 2026-03-08T22:48:06.562 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-classes-3.2.1-5.el9.noarch 24/103 2026-03-08T22:48:06.571 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyOpenSSL-21.0.0-1.el9.noarch 25/103 2026-03-08T22:48:06.581 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-asyncssh-2.13.2-5.el9.noarch 26/103 2026-03-08T22:48:06.581 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/103 2026-03-08T22:48:06.587 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 27/103 2026-03-08T22:48:06.678 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpatch-1.21-16.el9.noarch 28/103 2026-03-08T22:48:06.695 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-scipy-1.9.3-2.el9.x86_64 29/103 2026-03-08T22:48:06.707 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/103 2026-03-08T22:48:06.707 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/libstoragemgmt.service". 2026-03-08T22:48:06.707 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.708 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libstoragemgmt-1.10.1-1.el9.x86_64 30/103 2026-03-08T22:48:06.733 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libstoragemgmt-1.10.1-1.el9.x86_64 30/103 2026-03-08T22:48:06.751 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 31/103 2026-03-08T22:48:06.756 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cryptography-36.0.1-5.el9.x86_64 32/103 2026-03-08T22:48:06.758 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-compiler-3.14.0-17.el9.x86_64 33/103 2026-03-08T22:48:06.761 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-bcrypt-3.2.2-1.el9.x86_64 34/103 2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103 2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-rbd-mirror@*.service" escaped as "ceph-rbd-mirror@\x2a.service". 
2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph-rbd-mirror.target". 2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-rbd-mirror.target". 2026-03-08T22:48:06.783 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.785 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103 2026-03-08T22:48:06.796 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 35/103 2026-03-08T22:48:06.801 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-mako-1.1.4-6.el9.noarch 36/103 2026-03-08T22:48:06.804 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-context-6.0.1-3.el9.noarch 37/103 2026-03-08T22:48:06.806 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-packaging-20.9-5.el9.noarch 38/103 2026-03-08T22:48:06.809 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-portend-3.1.0-2.el9.noarch 39/103 2026-03-08T22:48:06.811 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-tempora-5.0.0-2.el9.noarch 40/103 2026-03-08T22:48:06.815 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-functools-3.5.0-2.el9.noarch 41/103 2026-03-08T22:48:06.819 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-routes-2.5.1-5.el9.noarch 42/103 2026-03-08T22:48:06.823 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cffi-1.14.5-5.el9.x86_64 43/103 2026-03-08T22:48:06.868 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pycparser-2.20-6.el9.noarch 44/103 2026-03-08T22:48:06.881 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-numpy-1:1.23.5-2.el9.x86_64 45/103 2026-03-08T22:48:06.884 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-netlib-3.0.4-9.el9.x86_64 46/103 2026-03-08T22:48:06.889 
INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 47/103 2026-03-08T22:48:06.891 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-openmp-0.3.29-1.el9.x86_64 48/103 2026-03-08T22:48:06.894 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libgfortran-11.5.0-14.el9.x86_64 49/103 2026-03-08T22:48:06.896 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-libstoragemgmt-1.10.1-1.el9.x86_64 50/103 2026-03-08T22:48:06.916 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103 2026-03-08T22:48:06.916 INFO:teuthology.orchestra.run.vm00.stdout:Glob pattern passed to enable, but globs are not supported for this. 2026-03-08T22:48:06.916 INFO:teuthology.orchestra.run.vm00.stdout:Invalid unit name "ceph-immutable-object-cache@*.service" escaped as "ceph-immutable-object-cache@\x2a.service". 2026-03-08T22:48:06.916 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:06.916 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103 2026-03-08T22:48:06.923 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-immutable-object-cache-2:19.2.3-678.ge911bd 51/103 2026-03-08T22:48:06.925 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : openblas-0.3.29-1.el9.x86_64 52/103 2026-03-08T22:48:06.927 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : flexiblas-3.0.4-9.el9.x86_64 53/103 2026-03-08T22:48:06.930 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ply-3.11-14.el9.noarch 54/103 2026-03-08T22:48:06.932 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-repoze-lru-0.7-16.el9.noarch 55/103 2026-03-08T22:48:06.934 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jaraco-8.2.1-3.el9.noarch 56/103 2026-03-08T22:48:06.937 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-more-itertools-8.12.0-2.el9.noarch 57/103 2026-03-08T22:48:06.939 
INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-toml-0.10.2-6.el9.noarch 58/103 2026-03-08T22:48:06.942 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pytz-2021.1-5.el9.noarch 59/103 2026-03-08T22:48:06.944 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyparsing-2.4.7-9.el9.noarch 60/103 2026-03-08T22:48:06.952 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-backports-tarfile-1.2.0-1.el9.noarch 61/103 2026-03-08T22:48:06.956 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-devel-3.9.25-3.el9.x86_64 62/103 2026-03-08T22:48:06.958 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-jsonpointer-2.0-4.el9.noarch 63/103 2026-03-08T22:48:06.961 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-typing-extensions-4.15.0-1.el9.noarch 64/103 2026-03-08T22:48:06.963 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-idna-2.10-7.el9.1.noarch 65/103 2026-03-08T22:48:06.969 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pysocks-1.7.1-12.el9.noarch 66/103 2026-03-08T22:48:06.973 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-pyasn1-0.4.8-7.el9.noarch 67/103 2026-03-08T22:48:06.978 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-logutils-0.3.5-21.el9.noarch 68/103 2026-03-08T22:48:06.982 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-webob-1.8.8-2.el9.noarch 69/103 2026-03-08T22:48:06.987 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cachetools-4.2.4-1.el9.noarch 70/103 2026-03-08T22:48:06.990 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-chardet-4.0.0-5.el9.noarch 71/103 2026-03-08T22:48:06.993 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-autocommand-2.2.2-8.el9.noarch 72/103 2026-03-08T22:48:06.998 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : grpc-data-1.46.7-10.el9.noarch 73/103 2026-03-08T22:48:07.002 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
python3-protobuf-3.14.0-17.el9.noarch 74/103 2026-03-08T22:48:07.006 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-zc-lockfile-2.0-10.el9.noarch 75/103 2026-03-08T22:48:07.014 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-natsort-7.1.1-5.el9.noarch 76/103 2026-03-08T22:48:07.019 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-oauthlib-3.1.1-5.el9.noarch 77/103 2026-03-08T22:48:07.022 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-websocket-client-1.2.3-2.el9.noarch 78/103 2026-03-08T22:48:07.024 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-certifi-2023.05.07-4.el9.noarch 79/103 2026-03-08T22:48:07.026 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 80/103 2026-03-08T22:48:07.032 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 81/103 2026-03-08T22:48:07.036 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-werkzeug-2.0.3-3.el9.1.noarch 82/103 2026-03-08T22:48:07.056 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103 2026-03-08T22:48:07.056 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/multi-user.target.wants/ceph.target". 2026-03-08T22:48:07.056 INFO:teuthology.orchestra.run.vm00.stdout:Removed "/etc/systemd/system/ceph.target.wants/ceph-crash.service". 
2026-03-08T22:48:07.056 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:07.063 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103 2026-03-08T22:48:07.092 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 83/103 2026-03-08T22:48:07.092 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 84/103 2026-03-08T22:48:07.102 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 84/103 2026-03-08T22:48:07.107 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatzip-libs-1.3.1-1.el9.x86_64 85/103 2026-03-08T22:48:07.110 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 86/103 2026-03-08T22:48:07.112 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-prettytable-0.7.2-27.el9.noarch 87/103 2026-03-08T22:48:07.112 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 88/103 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 88/103 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /sys 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /proc 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /mnt 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /var/tmp 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /home 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /root 2026-03-08T22:48:12.390 INFO:teuthology.orchestra.run.vm00.stdout:skipping the directory /tmp 2026-03-08T22:48:12.391 INFO:teuthology.orchestra.run.vm00.stdout: 2026-03-08T22:48:12.401 
INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatlib-25.08.0-2.el9.x86_64 89/103 2026-03-08T22:48:12.419 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 90/103 2026-03-08T22:48:12.419 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qatlib-service-25.08.0-2.el9.x86_64 90/103 2026-03-08T22:48:12.426 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: qatlib-service-25.08.0-2.el9.x86_64 90/103 2026-03-08T22:48:12.429 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : gperftools-libs-2.9.1-3.el9.x86_64 91/103 2026-03-08T22:48:12.431 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libunwind-1.6.2-1.el9.x86_64 92/103 2026-03-08T22:48:12.434 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : pciutils-3.7.0-7.el9.x86_64 93/103 2026-03-08T22:48:12.436 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : liboath-2.6.12-1.el9.x86_64 94/103 2026-03-08T22:48:12.436 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 95/103 2026-03-08T22:48:12.450 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 95/103 2026-03-08T22:48:12.452 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ledmon-libs-1.1.0-3.el9.x86_64 96/103 2026-03-08T22:48:12.455 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libquadmath-11.5.0-14.el9.x86_64 97/103 2026-03-08T22:48:12.458 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-markupsafe-1.1.1-12.el9.x86_64 98/103 2026-03-08T22:48:12.461 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : protobuf-3.14.0-17.el9.x86_64 99/103 2026-03-08T22:48:12.467 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libconfig-1.7.2-9.el9.x86_64 100/103 2026-03-08T22:48:12.475 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : cryptsetup-2.8.1-3.el9.x86_64 101/103 2026-03-08T22:48:12.479 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : 
abseil-cpp-20211102.0-4.el9.x86_64 102/103 2026-03-08T22:48:12.479 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : abseil-cpp-20211102.0-4.el9.x86_64 1/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64 2/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64 3/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.e 4/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-immutable-object-cache-2:19.2.3-678.ge911bd 5/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64 6/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noar 7/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.no 8/103 2026-03-08T22:48:12.584 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-diskprediction-local-2:19.2.3-678.ge911 9/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9 10/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch 11/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64 12/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el 13/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64 14/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch 15/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cryptsetup-2.8.1-3.el9.x86_64 16/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-3.0.4-9.el9.x86_64 17/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-netlib-3.0.4-9.el9.x86_64 18/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : flexiblas-openblas-openmp-3.0.4-9.el9.x86_64 19/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : gperftools-libs-2.9.1-3.el9.x86_64 20/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : grpc-data-1.46.7-10.el9.noarch 21/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ledmon-libs-1.1.0-3.el9.x86_64 22/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64 23/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libconfig-1.7.2-9.el9.x86_64 24/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libgfortran-11.5.0-14.el9.x86_64 25/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : liboath-2.6.12-1.el9.x86_64 26/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libquadmath-11.5.0-14.el9.x86_64 27/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_ 28/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : 
libstoragemgmt-1.10.1-1.el9.x86_64 29/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libunwind-1.6.2-1.el9.x86_64 30/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-0.3.29-1.el9.x86_64 31/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : openblas-openmp-0.3.29-1.el9.x86_64 32/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : pciutils-3.7.0-7.el9.x86_64 33/103 2026-03-08T22:48:12.585 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-3.14.0-17.el9.x86_64 34/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : protobuf-compiler-3.14.0-17.el9.x86_64 35/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-asyncssh-2.13.2-5.el9.noarch 36/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-autocommand-2.2.2-8.el9.noarch 37/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-babel-2.9.1-2.el9.noarch 38/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-backports-tarfile-1.2.0-1.el9.noarch 39/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-bcrypt-3.2.2-1.el9.x86_64 40/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cachetools-4.2.4-1.el9.noarch 41/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x 42/103 2026-03-08T22:48:12.586 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-certifi-2023.05.07-4.el9.noarch 43/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cffi-1.14.5-5.el9.x86_64 44/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-chardet-4.0.0-5.el9.noarch 45/103 
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cheroot-10.0.1-4.el9.noarch 46/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cherrypy-18.6.1-2.el9.noarch 47/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cryptography-36.0.1-5.el9.x86_64 48/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-devel-3.9.25-3.el9.x86_64 49/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-google-auth-1:2.45.0-1.el9.noarch 50/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-1.46.7-10.el9.x86_64 51/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-grpcio-tools-1.46.7-10.el9.x86_64 52/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-idna-2.10-7.el9.1.noarch 53/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-8.2.1-3.el9.noarch 54/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-classes-3.2.1-5.el9.noarch 55/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-collections-3.0.0-8.el9.noarch 56/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-context-6.0.1-3.el9.noarch 57/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-functools-3.5.0-2.el9.noarch 58/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jaraco-text-4.0.0-2.el9.noarch 59/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jinja2-2.11.3-8.el9.noarch 60/103 2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jsonpatch-1.21-16.el9.noarch 61/103 
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-jsonpointer-2.0-4.el9.noarch 62/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-kubernetes-1:26.1.0-3.el9.noarch 63/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-libstoragemgmt-1.10.1-1.el9.x86_64 64/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-logutils-0.3.5-21.el9.noarch 65/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-mako-1.1.4-6.el9.noarch 66/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-markupsafe-1.1.1-12.el9.x86_64 67/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-more-itertools-8.12.0-2.el9.noarch 68/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-natsort-7.1.1-5.el9.noarch 69/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-1:1.23.5-2.el9.x86_64 70/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-numpy-f2py-1:1.23.5-2.el9.x86_64 71/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-oauthlib-3.1.1-5.el9.noarch 72/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-packaging-20.9-5.el9.noarch 73/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pecan-1.4.2-3.el9.noarch 74/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ply-3.11-14.el9.noarch 75/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-portend-3.1.0-2.el9.noarch 76/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-prettytable-0.7.2-27.el9.noarch 77/103
2026-03-08T22:48:12.587
INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-protobuf-3.14.0-17.el9.noarch 78/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyOpenSSL-21.0.0-1.el9.noarch 79/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-0.4.8-7.el9.noarch 80/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyasn1-modules-0.4.8-7.el9.noarch 81/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pycparser-2.20-6.el9.noarch 82/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pyparsing-2.4.7-9.el9.noarch 83/103
2026-03-08T22:48:12.587 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pysocks-1.7.1-12.el9.noarch 84/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-pytz-2021.1-5.el9.noarch 85/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-repoze-lru-0.7-16.el9.noarch 86/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-2.25.1-10.el9.noarch 87/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-requests-oauthlib-1.3.0-12.el9.noarch 88/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-routes-2.5.1-5.el9.noarch 89/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rsa-4.9-2.el9.noarch 90/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-scipy-1.9.3-2.el9.x86_64 91/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-tempora-5.0.0-2.el9.noarch 92/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-toml-0.10.2-6.el9.noarch 93/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout:
Verifying : python3-typing-extensions-4.15.0-1.el9.noarch 94/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-urllib3-1.26.5-7.el9.noarch 95/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-webob-1.8.8-2.el9.noarch 96/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-websocket-client-1.2.3-2.el9.noarch 97/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-werkzeug-2.0.3-3.el9.1.noarch 98/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-zc-lockfile-2.0-10.el9.noarch 99/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-25.08.0-2.el9.x86_64 100/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatlib-service-25.08.0-2.el9.x86_64 101/103
2026-03-08T22:48:12.588 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qatzip-libs-1.3.1-1.el9.x86_64 102/103
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64 103/103
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: abseil-cpp-20211102.0-4.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-base-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-grafana-dashboards-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-immutable-object-cache-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout:
ceph-mgr-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-dashboard-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-diskprediction-local-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-modules-core-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-mgr-rook-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-osd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-prometheus-alerts-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.667 INFO:teuthology.orchestra.run.vm00.stdout: ceph-selinux-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: ceph-volume-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: cryptsetup-2.8.1-3.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-3.0.4-9.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-netlib-3.0.4-9.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: flexiblas-openblas-openmp-3.0.4-9.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: gperftools-libs-2.9.1-3.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: grpc-data-1.46.7-10.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: ledmon-libs-1.1.0-3.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libcephsqlite-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout:
libconfig-1.7.2-9.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libgfortran-11.5.0-14.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: liboath-2.6.12-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libquadmath-11.5.0-14.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libradosstriper1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: libunwind-1.6.2-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: openblas-0.3.29-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: openblas-openmp-0.3.29-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: pciutils-3.7.0-7.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-3.14.0-17.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: protobuf-compiler-3.14.0-17.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-asyncssh-2.13.2-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-autocommand-2.2.2-8.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-babel-2.9.1-2.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-backports-tarfile-1.2.0-1.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-bcrypt-3.2.2-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cachetools-4.2.4-1.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-common-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-certifi-2023.05.07-4.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cffi-1.14.5-5.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-chardet-4.0.0-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cheroot-10.0.1-4.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cherrypy-18.6.1-2.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-cryptography-36.0.1-5.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-devel-3.9.25-3.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-google-auth-1:2.45.0-1.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-1.46.7-10.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-grpcio-tools-1.46.7-10.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-idna-2.10-7.el9.1.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-8.2.1-3.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-classes-3.2.1-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-collections-3.0.0-8.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-context-6.0.1-3.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-functools-3.5.0-2.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jaraco-text-4.0.0-2.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jinja2-2.11.3-8.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpatch-1.21-16.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-jsonpointer-2.0-4.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-kubernetes-1:26.1.0-3.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-libstoragemgmt-1.10.1-1.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-logutils-0.3.5-21.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-mako-1.1.4-6.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-markupsafe-1.1.1-12.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-more-itertools-8.12.0-2.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-natsort-7.1.1-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-1:1.23.5-2.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-numpy-f2py-1:1.23.5-2.el9.x86_64
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-oauthlib-3.1.1-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-packaging-20.9-5.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-pecan-1.4.2-3.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-ply-3.11-14.el9.noarch
2026-03-08T22:48:12.668 INFO:teuthology.orchestra.run.vm00.stdout: python3-portend-3.1.0-2.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-prettytable-0.7.2-27.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-protobuf-3.14.0-17.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyOpenSSL-21.0.0-1.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-0.4.8-7.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyasn1-modules-0.4.8-7.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pycparser-2.20-6.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pyparsing-2.4.7-9.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pysocks-1.7.1-12.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-pytz-2021.1-5.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-repoze-lru-0.7-16.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-2.25.1-10.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-requests-oauthlib-1.3.0-12.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-routes-2.5.1-5.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-rsa-4.9-2.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-scipy-1.9.3-2.el9.x86_64
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-tempora-5.0.0-2.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-toml-0.10.2-6.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-typing-extensions-4.15.0-1.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-urllib3-1.26.5-7.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-webob-1.8.8-2.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-websocket-client-1.2.3-2.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-werkzeug-2.0.3-3.el9.1.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: python3-zc-lockfile-2.0-10.el9.noarch
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: qatlib-25.08.0-2.el9.x86_64
2026-03-08T22:48:12.669
INFO:teuthology.orchestra.run.vm00.stdout: qatlib-service-25.08.0-2.el9.x86_64
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: qatzip-libs-1.3.1-1.el9.x86_64
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout: rbd-mirror-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:12.669 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:12.870 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout: cephadm noarch 2:19.2.3-678.ge911bdeb.el9 @ceph-noarch 775 k
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:Remove 1 Package
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 775 k
2026-03-08T22:48:12.871 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-08T22:48:12.873 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-08T22:48:12.873 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-08T22:48:12.874 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-08T22:48:12.874 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:48:12.890 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:48:12.890 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-08T22:48:13.013 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : cephadm-2:19.2.3-678.ge911bdeb.el9.noarch 1/1
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout: cephadm-2:19.2.3-678.ge911bdeb.el9.noarch
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:13.066 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:13.275 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-immutable-object-cache
2026-03-08T22:48:13.275 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:13.278 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:13.279 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:13.279 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:13.438 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr
2026-03-08T22:48:13.439 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:13.442 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:13.442 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:13.442 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:13.597 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-dashboard
2026-03-08T22:48:13.597 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:13.600 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:13.600 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:13.601 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:13.757 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-diskprediction-local
2026-03-08T22:48:13.757 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:13.760 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:13.761 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:13.761 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:13.914 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-rook
2026-03-08T22:48:13.914 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:13.917 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:13.917 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:13.917 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:14.067 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-mgr-cephadm
2026-03-08T22:48:14.067 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:14.069 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:14.070 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:14.070 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:14.229 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.6 M
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:Remove 1 Package
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 3.6 M
2026-03-08T22:48:14.230 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-08T22:48:14.231 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-08T22:48:14.231 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-08T22:48:14.240 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-08T22:48:14.240 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:48:14.264 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:48:14.278 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-08T22:48:14.334 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 1/1
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout: ceph-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.378 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:14.538 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: ceph-volume
2026-03-08T22:48:14.538 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:14.541 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:14.542 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:14.542 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repo Size
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 456 k
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 153 k
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:Remove 2 Packages
2026-03-08T22:48:14.704 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.705 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 610 k
2026-03-08T22:48:14.705 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-08T22:48:14.706 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-08T22:48:14.706 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-08T22:48:14.716 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-08T22:48:14.716 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:48:14.743 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:48:14.749 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-08T22:48:14.762 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-08T22:48:14.828 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-08T22:48:14.828 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 1/2
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64 2/2
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout: librados-devel-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:14.873 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:15.043 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repo Size
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 3.0 M
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 514 k
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 187 k
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Remove 3 Packages
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 3.7 M
2026-03-08T22:48:15.044 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-08T22:48:15.046 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-08T22:48:15.046 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-08T22:48:15.061 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-08T22:48:15.061 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:48:15.097 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:48:15.102 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-08T22:48:15.107 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-08T22:48:15.107 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-08T22:48:15.172 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-08T22:48:15.173 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64 1/3
2026-03-08T22:48:15.173 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86 2/3
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64 3/3
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout: libcephfs2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout: python3-ceph-argparse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout: python3-cephfs-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.214 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:15.372 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: libcephfs-devel
2026-03-08T22:48:15.372 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:15.375 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:15.375 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:15.376 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:15.534 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: Package Arch Version Repository Size
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout:Removing:
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: librados2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 12 M
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout:Removing dependent packages:
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 1.1 M
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 265 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: qemu-kvm-block-rbd x86_64 17:10.1.0-15.el9 @appstream 37 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 227 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 490 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout:Removing unused dependencies:
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options x86_64 1.75.0-13.el9 @appstream 276 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: libarrow x86_64 9.0.0-15.el9 @epel 18 M
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc noarch 9.0.0-15.el9 @epel 122 k
2026-03-08T22:48:15.535 INFO:teuthology.orchestra.run.vm00.stdout: libnbd x86_64 1.20.3-4.el9 @appstream 453 k
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj x86_64 1.12.1-1.el9 @appstream 383 k
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq x86_64 0.11.0-7.el9 @appstream 102 k
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: librbd1 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 13 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka x86_64 1.6.1-102.el9 @appstream 2.0 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: librgw2 x86_64 2:19.2.3-678.ge911bdeb.el9 @ceph 19 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust x86_64 2.12.0-6.el9 @appstream 1.0 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs x86_64 9.0.0-15.el9 @epel 2.8 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: re2 x86_64 1:20211101-20.el9 @epel 472 k
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout: thrift x86_64 0.15.0-4.el9 @epel 4.8 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:Transaction Summary
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:================================================================================
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:Remove 20 Packages
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:Freed space: 79 M
2026-03-08T22:48:15.536 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction check
2026-03-08T22:48:15.539 INFO:teuthology.orchestra.run.vm00.stdout:Transaction check succeeded.
2026-03-08T22:48:15.539 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction test
2026-03-08T22:48:15.560 INFO:teuthology.orchestra.run.vm00.stdout:Transaction test succeeded.
2026-03-08T22:48:15.560 INFO:teuthology.orchestra.run.vm00.stdout:Running transaction
2026-03-08T22:48:15.597 INFO:teuthology.orchestra.run.vm00.stdout: Preparing : 1/1
2026-03-08T22:48:15.602 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 1/20
2026-03-08T22:48:15.610 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 2/20
2026-03-08T22:48:15.617 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 3/20
2026-03-08T22:48:15.617 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-08T22:48:15.641 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 4/20
2026-03-08T22:48:15.648 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : parquet-libs-9.0.0-15.el9.x86_64 5/20
2026-03-08T22:48:15.649 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 6/20
2026-03-08T22:48:15.655 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-08T22:48:15.663 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 8/20
2026-03-08T22:48:15.665 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libarrow-doc-9.0.0-15.el9.noarch 9/20
2026-03-08T22:48:15.665 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-08T22:48:15.687 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-08T22:48:15.687 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-08T22:48:15.688 INFO:teuthology.orchestra.run.vm00.stdout:warning: file /etc/ceph: remove failed: No such file or directory
2026-03-08T22:48:15.688 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.707 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 11/20
2026-03-08T22:48:15.709 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libarrow-9.0.0-15.el9.x86_64 12/20
2026-03-08T22:48:15.717 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : re2-1:20211101-20.el9.x86_64 13/20
2026-03-08T22:48:15.723 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : lttng-ust-2.12.0-6.el9.x86_64 14/20
2026-03-08T22:48:15.731 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : thrift-0.15.0-4.el9.x86_64 15/20
2026-03-08T22:48:15.739 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libnbd-1.20.3-4.el9.x86_64 16/20
2026-03-08T22:48:15.746 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : libpmemobj-1.12.1-1.el9.x86_64 17/20
2026-03-08T22:48:15.753 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : boost-program-options-1.75.0-13.el9.x86_64 18/20
2026-03-08T22:48:15.761 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librabbitmq-0.11.0-7.el9.x86_64 19/20
2026-03-08T22:48:15.780 INFO:teuthology.orchestra.run.vm00.stdout: Erasing : librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Running scriptlet: librdkafka-1.6.1-102.el9.x86_64 20/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : boost-program-options-1.75.0-13.el9.x86_64 1/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-9.0.0-15.el9.x86_64 2/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libarrow-doc-9.0.0-15.el9.noarch 3/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libnbd-1.20.3-4.el9.x86_64 4/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : libpmemobj-1.12.1-1.el9.x86_64 5/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librabbitmq-0.11.0-7.el9.x86_64 6/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librados2-2:19.2.3-678.ge911bdeb.el9.x86_64 7/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64 8/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librdkafka-1.6.1-102.el9.x86_64 9/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64 10/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : lttng-ust-2.12.0-6.el9.x86_64 11/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : parquet-libs-9.0.0-15.el9.x86_64 12/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64 13/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64 14/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64 15/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64 16/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64 17/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64 18/20
2026-03-08T22:48:15.839 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : re2-1:20211101-20.el9.x86_64 19/20
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: Verifying : thrift-0.15.0-4.el9.x86_64 20/20
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout:Removed:
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: boost-program-options-1.75.0-13.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-9.0.0-15.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: libarrow-doc-9.0.0-15.el9.noarch
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: libnbd-1.20.3-4.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: libpmemobj-1.12.1-1.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: librabbitmq-0.11.0-7.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: librados2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: librbd1-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: librdkafka-1.6.1-102.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: librgw2-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: lttng-ust-2.12.0-6.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: parquet-libs-9.0.0-15.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: python3-rados-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: python3-rbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: python3-rgw-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: qemu-kvm-block-rbd-17:10.1.0-15.el9.x86_64
2026-03-08T22:48:15.896 INFO:teuthology.orchestra.run.vm00.stdout: rbd-fuse-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.897 INFO:teuthology.orchestra.run.vm00.stdout: rbd-nbd-2:19.2.3-678.ge911bdeb.el9.x86_64
2026-03-08T22:48:15.897 INFO:teuthology.orchestra.run.vm00.stdout: re2-1:20211101-20.el9.x86_64
2026-03-08T22:48:15.897 INFO:teuthology.orchestra.run.vm00.stdout: thrift-0.15.0-4.el9.x86_64
2026-03-08T22:48:15.897 INFO:teuthology.orchestra.run.vm00.stdout:
2026-03-08T22:48:15.897 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.097 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: librbd1
2026-03-08T22:48:16.097 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.099 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.100 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.100 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.274 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rados
2026-03-08T22:48:16.274 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.276 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.276 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.277 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.454 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rgw
2026-03-08T22:48:16.454 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.456 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.457 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.457 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.613 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-cephfs
2026-03-08T22:48:16.613 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.615 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.615 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.616 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.770 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: python3-rbd
2026-03-08T22:48:16.770 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.772 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.773 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.773 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:16.923 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-fuse
2026-03-08T22:48:16.923 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:16.925 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:16.926 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:16.926 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:17.076 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-mirror
2026-03-08T22:48:17.076 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:17.077 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:17.078 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:17.078 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:17.223 INFO:teuthology.orchestra.run.vm00.stdout:No match for argument: rbd-nbd
2026-03-08T22:48:17.224 INFO:teuthology.orchestra.run.vm00.stderr:No packages marked for removal.
2026-03-08T22:48:17.225 INFO:teuthology.orchestra.run.vm00.stdout:Dependencies resolved.
2026-03-08T22:48:17.226 INFO:teuthology.orchestra.run.vm00.stdout:Nothing to do.
2026-03-08T22:48:17.226 INFO:teuthology.orchestra.run.vm00.stdout:Complete!
2026-03-08T22:48:17.244 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean all
2026-03-08T22:48:17.373 INFO:teuthology.orchestra.run.vm00.stdout:56 files removed
2026-03-08T22:48:17.392 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-08T22:48:17.414 DEBUG:teuthology.orchestra.run.vm00:> sudo yum clean expire-cache
2026-03-08T22:48:17.555 INFO:teuthology.orchestra.run.vm00.stdout:Cache was expired
2026-03-08T22:48:17.556 INFO:teuthology.orchestra.run.vm00.stdout:0 files removed
2026-03-08T22:48:17.571 DEBUG:teuthology.parallel:result is None
2026-03-08T22:48:17.571 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm00.local
2026-03-08T22:48:17.571 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f /etc/yum.repos.d/ceph.repo
2026-03-08T22:48:17.592 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/yum/pluginconf.d/priorities.conf.orig /etc/yum/pluginconf.d/priorities.conf
2026-03-08T22:48:17.656 DEBUG:teuthology.parallel:result is None
2026-03-08T22:48:17.656 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-08T22:48:17.658 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-08T22:48:17.658 DEBUG:teuthology.orchestra.run.vm00:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-08T22:48:17.709 INFO:teuthology.orchestra.run.vm00.stderr:bash: line 1: ntpq: command not found
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:MS Name/IP address Stratum Poll Reach LastRx Last sample
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:===============================================================================
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:^+ vps-fra1.orleans.ddnss.de 2 6 377 7 -34us[ -34us] +/- 12ms
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp01.pingless.com 2 6 377 9 +600us[ +600us] +/- 13ms
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:^+ ntp5.kernfusion.at 2 6 377 7 -372us[ -372us] +/- 17ms
2026-03-08T22:48:17.713 INFO:teuthology.orchestra.run.vm00.stdout:^* ntp1.intra2net.com 2 6 377 10 -190us[ -217us] +/- 10ms
2026-03-08T22:48:17.713 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-08T22:48:17.715 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-08T22:48:17.715 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-08T22:48:17.717 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-08T22:48:17.719 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-08T22:48:17.720 INFO:teuthology.task.internal:Duration was 599.289696 seconds
2026-03-08T22:48:17.721 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-08T22:48:17.722 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-08T22:48:17.722 DEBUG:teuthology.orchestra.run.vm00:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-08T22:48:17.788 INFO:teuthology.orchestra.run.vm00.stderr:Redirecting to /bin/systemctl restart rsyslog.service
2026-03-08T22:48:17.929 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-08T22:48:17.929 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm00.local
2026-03-08T22:48:17.929 DEBUG:teuthology.orchestra.run.vm00:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-08T22:48:17.991 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-08T22:48:17.991 DEBUG:teuthology.orchestra.run.vm00:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T22:48:18.431 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-08T22:48:18.431 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-08T22:48:18.456 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-08T22:48:18.457 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-08T22:48:18.457 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-08T22:48:18.457 INFO:teuthology.orchestra.run.vm00.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-08T22:48:18.457 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-08T22:48:18.605 INFO:teuthology.orchestra.run.vm00.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 98.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-08T22:48:18.608 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-08T22:48:18.611 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-08T22:48:18.611 DEBUG:teuthology.orchestra.run.vm00:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-08T22:48:18.671 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-08T22:48:18.674 DEBUG:teuthology.orchestra.run.vm00:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-08T22:48:18.734 INFO:teuthology.orchestra.run.vm00.stdout:kernel.core_pattern = core
2026-03-08T22:48:18.744 DEBUG:teuthology.orchestra.run.vm00:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-08T22:48:18.797 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-08T22:48:18.797 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-08T22:48:18.799 INFO:teuthology.task.internal:Transferring archived files...
2026-03-08T22:48:18.799 DEBUG:teuthology.misc:Transferring archived files from vm00:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-08_21:49:43-rados:standalone-squid-none-default-vps/279/remote/vm00
2026-03-08T22:48:18.799 DEBUG:teuthology.orchestra.run.vm00:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-08T22:48:18.863 INFO:teuthology.task.internal:Removing archive directory...
2026-03-08T22:48:18.863 DEBUG:teuthology.orchestra.run.vm00:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-08T22:48:18.916 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-08T22:48:18.919 INFO:teuthology.task.internal:Not uploading archives.
2026-03-08T22:48:18.919 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-08T22:48:18.921 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-08T22:48:18.921 DEBUG:teuthology.orchestra.run.vm00:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-08T22:48:18.973 INFO:teuthology.orchestra.run.vm00.stdout: 8532145 0 drwxr-xr-x 2 ubuntu ubuntu 6 Mar 8 22:48 /home/ubuntu/cephtest
2026-03-08T22:48:18.974 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-08T22:48:18.979 INFO:teuthology.run:Summary data:
description: rados:standalone/{supported-random-distro$/{centos_latest} workloads/mon-stretch}
duration: 599.2896964550018
flavor: default
owner: kyr
success: true
2026-03-08T22:48:18.979 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-08T22:48:18.994 INFO:teuthology.run:pass