2026-03-07T10:45:09.765 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-07T10:45:09.769 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-07T10:45:09.786 INFO:teuthology.run:Config:
archive_path: /archive/irq0-2026-03-07_10:43:39-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/22
branch: cobaltcore-storage-v19.2.3-fasttrack-5
description: orch:cephadm:workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm}
email: null
first_in_suite: false
flavor: default
job_id: '22'
ktype: distro
last_in_suite: false
machine_type: vps
name: irq0-2026-03-07_10:43:39-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: cobaltcore-storage-v19.2.3-fasttrack-5
  ansible.cephlab:
    branch: main
    repo: https://github.com/kshtsk/ceph-cm-ansible.git
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 3
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: true
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  cephadm:
    cephadm_binary_url: https://download.ceph.com/rpm-19.2.3/el9/noarch/cephadm
    containers:
      image: harbor.clyso.com/custom-ceph/ceph/ceph:cobaltcore-storage-v19.2.3-fasttrack-5
  install:
    ceph:
      flavor: default
      sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
    extra_system_packages:
      deb:
      - python3-xmltodict
      - s3cmd
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - s3cmd
    repos:
    - name: ceph-source
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/SRPMS
    - name: ceph-noarch
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/noarch
    - name: ceph
      priority: 1
      url: https://s3.clyso.com/ces-packages/components/ceph/rpm-19.2.3-39-g340d3c24fc6/el9.clyso/x86_64
  workunit:
    branch: tt-fasttrack-5-workunits
    sha1: c00b45d1ae607078ce5f9bef6b691d18bc82a838
owner: irq0
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - mon.a
  - mgr.x
  - osd.0
  - client.0
seed: 8363
sha1: 340d3c24fc6ae7529322dc7ccee6c6cb2589da0a
sleep_before_teardown: 0
subset: 1/64
suite: orch:cephadm:workunits
suite_branch: tt-fasttrack-5-workunits
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_c00b45d1ae607078ce5f9bef6b691d18bc82a838/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: c00b45d1ae607078ce5f9bef6b691d18bc82a838
targets:
  vm06.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbX4HTWfdXyryn3Y+3cx89NmSKQnu4O2wVbpj1sj1dly49YFTce2TqCnzqFXA4yCbAp+8AfVeY/8xfFiAa94JQ=
tasks:
- exec:
    mon.a:
    - yum install -y python3 || apt install -y python3
- workunit:
    clients:
      client.0:
      - cephadm/test_cephadm.sh
    no_coverage_and_limits: true
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-07_10:43:39
tube: vps
user: irq0
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.2764
2026-03-07T10:45:09.786 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_c00b45d1ae607078ce5f9bef6b691d18bc82a838/qa; will attempt to use it
2026-03-07T10:45:09.787 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_c00b45d1ae607078ce5f9bef6b691d18bc82a838/qa/tasks
2026-03-07T10:45:09.787 INFO:teuthology.run_tasks:Running task internal.save_config...
2026-03-07T10:45:09.787 INFO:teuthology.task.internal:Saving configuration
2026-03-07T10:45:09.790 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-07T10:45:09.791 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-07T10:45:09.797 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm06.local', 'description': '/archive/irq0-2026-03-07_10:43:39-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/22', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-07 10:44:21.058111', 'locked_by': 'irq0', 'mac_address': '52:55:00:00:00:06', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDbX4HTWfdXyryn3Y+3cx89NmSKQnu4O2wVbpj1sj1dly49YFTce2TqCnzqFXA4yCbAp+8AfVeY/8xfFiAa94JQ='}
2026-03-07T10:45:09.797 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-07T10:45:09.798 INFO:teuthology.task.internal:roles: ubuntu@vm06.local - ['mon.a', 'mgr.x', 'osd.0', 'client.0']
2026-03-07T10:45:09.798 INFO:teuthology.run_tasks:Running task console_log...
2026-03-07T10:45:09.803 DEBUG:teuthology.task.console_log:vm06 does not support IPMI; excluding
2026-03-07T10:45:09.803 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7f82f48dba30>, signals=[15])
2026-03-07T10:45:09.803 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-07T10:45:09.803 INFO:teuthology.task.internal:Opening connections...
2026-03-07T10:45:09.803 DEBUG:teuthology.task.internal:connecting to ubuntu@vm06.local
2026-03-07T10:45:09.804 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-07T10:45:09.866 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-07T10:45:09.867 DEBUG:teuthology.orchestra.run.vm06:> uname -m
2026-03-07T10:45:09.937 INFO:teuthology.orchestra.run.vm06.stdout:x86_64
2026-03-07T10:45:09.937 DEBUG:teuthology.orchestra.run.vm06:> cat /etc/os-release
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:NAME="Ubuntu"
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_ID="22.04"
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:VERSION_CODENAME=jammy
2026-03-07T10:45:09.982 INFO:teuthology.orchestra.run.vm06.stdout:ID=ubuntu
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:ID_LIKE=debian
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-07T10:45:09.983 INFO:teuthology.orchestra.run.vm06.stdout:UBUNTU_CODENAME=jammy
2026-03-07T10:45:09.983 INFO:teuthology.lock.ops:Updating vm06.local on lock server
2026-03-07T10:45:09.992 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-07T10:45:09.993 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-07T10:45:09.994 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-07T10:45:09.994 DEBUG:teuthology.orchestra.run.vm06:> test '!' -e /home/ubuntu/cephtest
2026-03-07T10:45:10.026 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-07T10:45:10.027 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-07T10:45:10.027 DEBUG:teuthology.orchestra.run.vm06:> test -z $(ls -A /var/lib/ceph)
2026-03-07T10:45:10.070 INFO:teuthology.orchestra.run.vm06.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-07T10:45:10.071 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-07T10:45:10.078 DEBUG:teuthology.orchestra.run.vm06:> test -e /ceph-qa-ready
2026-03-07T10:45:10.114 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:45:10.348 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-07T10:45:10.349 INFO:teuthology.task.internal:Creating test directory...
2026-03-07T10:45:10.349 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-07T10:45:10.352 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-07T10:45:10.354 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-07T10:45:10.354 INFO:teuthology.task.internal:Creating archive directory...
2026-03-07T10:45:10.355 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-07T10:45:10.399 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-07T10:45:10.400 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-07T10:45:10.400 DEBUG:teuthology.orchestra.run.vm06:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-07T10:45:10.441 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:45:10.442 DEBUG:teuthology.orchestra.run.vm06:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-07T10:45:10.490 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-07T10:45:10.495 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-07T10:45:10.495 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-07T10:45:10.497 INFO:teuthology.task.internal:Configuring sudo...
2026-03-07T10:45:10.497 DEBUG:teuthology.orchestra.run.vm06:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-07T10:45:10.547 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-07T10:45:10.549 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-07T10:45:10.549 DEBUG:teuthology.orchestra.run.vm06:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-07T10:45:10.590 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-07T10:45:10.634 DEBUG:teuthology.orchestra.run.vm06:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-07T10:45:10.678 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-07T10:45:10.678 DEBUG:teuthology.orchestra.run.vm06:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-07T10:45:10.727 DEBUG:teuthology.orchestra.run.vm06:> sudo service rsyslog restart
2026-03-07T10:45:10.782 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-07T10:45:10.784 INFO:teuthology.task.internal:Starting timer...
2026-03-07T10:45:10.784 INFO:teuthology.run_tasks:Running task pcp...
2026-03-07T10:45:10.786 INFO:teuthology.run_tasks:Running task selinux...
2026-03-07T10:45:10.788 INFO:teuthology.task.selinux:Excluding vm06: VMs are not yet supported
2026-03-07T10:45:10.788 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-07T10:45:10.788 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-07T10:45:10.788 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-07T10:45:10.788 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-07T10:45:10.789 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'repo': 'https://github.com/kshtsk/ceph-cm-ansible.git', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-07T10:45:10.789 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main to origin/main
2026-03-07T10:45:10.795 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-07T10:45:10.795 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventory3aaqr4i0 --limit vm06.local /home/teuthos/src/github.com_kshtsk_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-07T10:48:04.483 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm06.local')]
2026-03-07T10:48:04.484 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm06.local'
2026-03-07T10:48:04.484 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm06.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-07T10:48:04.546 DEBUG:teuthology.orchestra.run.vm06:> true
2026-03-07T10:48:04.749 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm06.local'
2026-03-07T10:48:04.749 INFO:teuthology.run_tasks:Running task clock...
2026-03-07T10:48:04.751 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-07T10:48:04.751 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-07T10:48:04.751 DEBUG:teuthology.orchestra.run.vm06:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-07T10:48:04.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Command line: ntpd -gq
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: ----------------------------------------------------
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: ntp-4 is maintained by Network Time Foundation,
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: corporation. Support and training for ntp-4 are
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: available at https://www.nwtime.org/support
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: ----------------------------------------------------
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: proto: precision = 0.029 usec (-25)
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: basedate set to 2022-02-04
2026-03-07T10:48:04.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: gps base set to 2022-02-06 (week 2196)
2026-03-07T10:48:04.809 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-07T10:48:04.809 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-07T10:48:04.809 INFO:teuthology.orchestra.run.vm06.stderr: 7 Mar 10:48:04 ntpd[15630]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 70 days ago
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen and drop on 0 v6wildcard [::]:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen normally on 2 lo 127.0.0.1:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen normally on 3 ens3 192.168.123.106:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen normally on 4 lo [::1]:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:6%2]:123
2026-03-07T10:48:04.810 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:04 ntpd[15630]: Listening on routing socket on fd #22 for interface updates
2026-03-07T10:48:05.809 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:05 ntpd[15630]: Soliciting pool server 77.90.0.148
2026-03-07T10:48:06.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:06 ntpd[15630]: Soliciting pool server 46.41.21.10
2026-03-07T10:48:06.808 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:06 ntpd[15630]: Soliciting pool server 217.154.182.60
2026-03-07T10:48:07.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:07 ntpd[15630]: Soliciting pool server 138.201.117.193
2026-03-07T10:48:07.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:07 ntpd[15630]: Soliciting pool server 85.215.166.214
2026-03-07T10:48:08.211 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:08 ntpd[15630]: Soliciting pool server 85.215.189.120
2026-03-07T10:48:08.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:08 ntpd[15630]: Soliciting pool server 185.228.138.224
2026-03-07T10:48:08.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:08 ntpd[15630]: Soliciting pool server 195.201.20.16
2026-03-07T10:48:08.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:08 ntpd[15630]: Soliciting pool server 78.46.56.170
2026-03-07T10:48:08.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:08 ntpd[15630]: Soliciting pool server 212.132.108.186
2026-03-07T10:48:09.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:09 ntpd[15630]: Soliciting pool server 79.133.44.141
2026-03-07T10:48:09.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:09 ntpd[15630]: Soliciting pool server 85.121.52.237
2026-03-07T10:48:09.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:09 ntpd[15630]: Soliciting pool server 45.9.61.155
2026-03-07T10:48:09.807 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:09 ntpd[15630]: Soliciting pool server 91.189.91.157
2026-03-07T10:48:10.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:10 ntpd[15630]: Soliciting pool server 185.125.190.58
2026-03-07T10:48:10.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:10 ntpd[15630]: Soliciting pool server 57.129.38.82
2026-03-07T10:48:10.806 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:10 ntpd[15630]: Soliciting pool server 134.60.111.110
2026-03-07T10:48:13.827 INFO:teuthology.orchestra.run.vm06.stdout: 7 Mar 10:48:13 ntpd[15630]: ntpd: time slew -0.012932 s
2026-03-07T10:48:13.827 INFO:teuthology.orchestra.run.vm06.stdout:ntpd: time slew -0.012932s
2026-03-07T10:48:13.845 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter
2026-03-07T10:48:13.845 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-07T10:48:13.845 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:48:13.846 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:48:13.846 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:48:13.846 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:48:13.846 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:48:13.846 INFO:teuthology.run_tasks:Running task exec...
2026-03-07T10:48:13.848 INFO:teuthology.task.exec:Executing custom commands...
2026-03-07T10:48:13.848 INFO:teuthology.task.exec:Running commands on role mon.a host ubuntu@vm06.local
2026-03-07T10:48:13.848 DEBUG:teuthology.orchestra.run.vm06:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'yum install -y python3 || apt install -y python3'
2026-03-07T10:48:13.894 INFO:teuthology.orchestra.run.vm06.stderr:bash: line 1: yum: command not found
2026-03-07T10:48:13.897 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-07T10:48:13.897 INFO:teuthology.orchestra.run.vm06.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2026-03-07T10:48:13.897 INFO:teuthology.orchestra.run.vm06.stderr:
2026-03-07T10:48:13.920 INFO:teuthology.orchestra.run.vm06.stdout:Reading package lists...
2026-03-07T10:48:14.079 INFO:teuthology.orchestra.run.vm06.stdout:Building dependency tree...
2026-03-07T10:48:14.079 INFO:teuthology.orchestra.run.vm06.stdout:Reading state information...
2026-03-07T10:48:14.182 INFO:teuthology.orchestra.run.vm06.stdout:python3 is already the newest version (3.10.6-1~22.04.1).
2026-03-07T10:48:14.182 INFO:teuthology.orchestra.run.vm06.stdout:python3 set to manually installed.
2026-03-07T10:48:14.182 INFO:teuthology.orchestra.run.vm06.stdout:The following packages were automatically installed and are no longer required:
2026-03-07T10:48:14.182 INFO:teuthology.orchestra.run.vm06.stdout: kpartx libsgutils2-2 sg3-utils sg3-utils-udev
2026-03-07T10:48:14.183 INFO:teuthology.orchestra.run.vm06.stdout:Use 'sudo apt autoremove' to remove them.
2026-03-07T10:48:14.256 INFO:teuthology.orchestra.run.vm06.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded.
2026-03-07T10:48:14.312 INFO:teuthology.run_tasks:Running task workunit...
2026-03-07T10:48:14.316 INFO:tasks.workunit:Pulling workunits from ref c00b45d1ae607078ce5f9bef6b691d18bc82a838
2026-03-07T10:48:14.316 INFO:tasks.workunit:Making a separate scratch dir for every client...
2026-03-07T10:48:14.316 DEBUG:teuthology.orchestra.run.vm06:> stat -- /home/ubuntu/cephtest/mnt.0
2026-03-07T10:48:14.359 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:48:14.359 INFO:teuthology.orchestra.run.vm06.stderr:stat: cannot statx '/home/ubuntu/cephtest/mnt.0': No such file or directory
2026-03-07T10:48:14.359 DEBUG:teuthology.orchestra.run.vm06:> mkdir -- /home/ubuntu/cephtest/mnt.0
2026-03-07T10:48:14.402 INFO:tasks.workunit:Created dir /home/ubuntu/cephtest/mnt.0
2026-03-07T10:48:14.403 DEBUG:teuthology.orchestra.run.vm06:> cd -- /home/ubuntu/cephtest/mnt.0 && mkdir -- client.0
2026-03-07T10:48:14.446 INFO:tasks.workunit:timeout=3h
2026-03-07T10:48:14.446 INFO:tasks.workunit:cleanup=True
2026-03-07T10:48:14.446 DEBUG:teuthology.orchestra.run.vm06:> rm -rf /home/ubuntu/cephtest/clone.client.0 && git clone https://github.com/kshtsk/ceph.git /home/ubuntu/cephtest/clone.client.0 && cd /home/ubuntu/cephtest/clone.client.0 && git checkout c00b45d1ae607078ce5f9bef6b691d18bc82a838
2026-03-07T10:48:14.491 INFO:tasks.workunit.client.0.vm06.stderr:Cloning into '/home/ubuntu/cephtest/clone.client.0'...
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:Note: switching to 'c00b45d1ae607078ce5f9bef6b691d18bc82a838'.
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:You are in 'detached HEAD' state. You can look around, make experimental
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:changes and commit them, and you can discard any commits you make in this
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:state without impacting any branches by switching back to a branch.
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:If you want to create a new branch to retain commits you create, you may
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:do so (now or later) by using -c with the switch command. Example:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr: git switch -c
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:Or undo this operation with:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr: git switch -
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:Turn off this advice by setting config variable advice.detachedHead to false
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:
2026-03-07T10:49:00.655 INFO:tasks.workunit.client.0.vm06.stderr:HEAD is now at c00b45d1ae6 qa/s/o:c:w: use no_coverage_and_limits
2026-03-07T10:49:00.661 DEBUG:teuthology.orchestra.run.vm06:> cd -- /home/ubuntu/cephtest/clone.client.0/qa/workunits && if test -e Makefile ; then make ; fi && find -executable -type f -printf '%P\0' >/home/ubuntu/cephtest/workunits.list.client.0
2026-03-07T10:49:00.705 INFO:tasks.workunit.client.0.vm06.stdout:for d in direct_io fs ; do ( cd $d ; make all ) ; done
2026-03-07T10:49:00.707 INFO:tasks.workunit.client.0.vm06.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-07T10:49:00.707 INFO:tasks.workunit.client.0.vm06.stdout:cc -Wall -Wextra -D_GNU_SOURCE direct_io_test.c -o direct_io_test
2026-03-07T10:49:00.743 INFO:tasks.workunit.client.0.vm06.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_sync_io.c -o test_sync_io
2026-03-07T10:49:00.772 INFO:tasks.workunit.client.0.vm06.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_short_dio_read.c -o test_short_dio_read
2026-03-07T10:49:00.794 INFO:tasks.workunit.client.0.vm06.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/direct_io'
2026-03-07T10:49:00.795 INFO:tasks.workunit.client.0.vm06.stdout:make[1]: Entering directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-07T10:49:00.795 INFO:tasks.workunit.client.0.vm06.stdout:cc -Wall -Wextra -D_GNU_SOURCE test_o_trunc.c -o test_o_trunc
2026-03-07T10:49:00.817 INFO:tasks.workunit.client.0.vm06.stdout:make[1]: Leaving directory '/home/ubuntu/cephtest/clone.client.0/qa/workunits/fs'
2026-03-07T10:49:00.820 DEBUG:teuthology.orchestra.run.vm06:> set -ex
2026-03-07T10:49:00.820 DEBUG:teuthology.orchestra.run.vm06:> dd if=/home/ubuntu/cephtest/workunits.list.client.0 of=/dev/stdout
2026-03-07T10:49:00.864 INFO:tasks.workunit:Running workunits matching cephadm/test_cephadm.sh on client.0...
2026-03-07T10:49:00.865 INFO:tasks.workunit:Running workunit cephadm/test_cephadm.sh...
2026-03-07T10:49:00.865 DEBUG:teuthology.orchestra.run.vm06:workunit test cephadm/test_cephadm.sh> mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c00b45d1ae607078ce5f9bef6b691d18bc82a838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh
2026-03-07T10:49:00.910 INFO:tasks.workunit.client.0.vm06.stderr:++ basename /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh
2026-03-07T10:49:00.911 INFO:tasks.workunit.client.0.vm06.stderr:+ SCRIPT_NAME=test_cephadm.sh
2026-03-07T10:49:00.911 INFO:tasks.workunit.client.0.vm06.stderr:+++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh
2026-03-07T10:49:00.911 INFO:tasks.workunit.client.0.vm06.stderr:++ cd /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:++ pwd
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ SCRIPT_DIR=/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -z '' ']'
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ CLEANUP=true
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ FSID=00000000-0000-0000-0000-0000deadbeef
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ IMAGE_MAIN=quay.ceph.io/ceph-ci/ceph:main
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ IMAGE_QUINCY=quay.ceph.io/ceph-ci/ceph:quincy
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ IMAGE_REEF=quay.ceph.io/ceph-ci/ceph:reef
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ IMAGE_SQUID=quay.ceph.io/ceph-ci/ceph:squid
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ IMAGE_DEFAULT=quay.ceph.io/ceph-ci/ceph:squid
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ OSD_IMAGE_NAME=test_cephadm_osd.img
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ OSD_IMAGE_SIZE=6G
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ OSD_TO_CREATE=2
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ OSD_VG_NAME=test_cephadm
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ OSD_LV_NAME=test_cephadm
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -d '' ']'
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:++ mktemp -d tmp.test_cephadm.sh.XXXXXX
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ TMPDIR=tmp.test_cephadm.sh.uibCgt
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -d '' ']'
2026-03-07T10:49:00.912 INFO:tasks.workunit.client.0.vm06.stderr:++ mktemp -d tmp.test_cephadm.sh.XXXXXX
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ TMPDIR_TEST_MULTIPLE_MOUNTS=tmp.test_cephadm.sh.c5UA8L
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM_SRC_DIR=/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM_SAMPLES_DIR=/home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/samples
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -z '' ']'
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ SUDO=sudo
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -z '' ']'
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ command -v cephadm
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -z '' ']'
2026-03-07T10:49:00.913 INFO:tasks.workunit.client.0.vm06.stderr:++ mktemp -p tmp.test_cephadm.sh.uibCgt tmp.cephadm.XXXXXX
2026-03-07T10:49:00.914 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM=tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:00.914 INFO:tasks.workunit.client.0.vm06.stderr:+ /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/build.sh tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:00.914 INFO:tasks.workunit.client.0.vm06.stderr:+++ dirname /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/build.sh
2026-03-07T10:49:00.915 INFO:tasks.workunit.client.0.vm06.stderr:++ cd /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm
2026-03-07T10:49:00.915 INFO:tasks.workunit.client.0.vm06.stderr:++ pwd
2026-03-07T10:49:00.915 INFO:tasks.workunit.client.0.vm06.stderr:+ SCRIPT_DIR=/home/ubuntu/cephtest/clone.client.0/src/cephadm
2026-03-07T10:49:00.915 INFO:tasks.workunit.client.0.vm06.stderr:+ exec python3 /home/ubuntu/cephtest/clone.client.0/src/cephadm/build.py tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Python Version: 3.10.12
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: dest='tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF'
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: source=None
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: python=None
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: version_vars=None
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: pip_use_venv='auto'
2026-03-07T10:49:00.935 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Argument: bundled_dependencies='pip'
2026-03-07T10:49:00.936 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Source Dir: /home/ubuntu/cephtest/clone.client.0/src/cephadm
2026-03-07T10:49:00.936 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Destination Path: /home/ubuntu/cephtest/mnt.0/client.0/tmp/tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:00.936 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Installing dependencies using pip
2026-03-07T10:49:00.936 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /usr/bin/python3 -m venv --help
2026-03-07T10:49:00.955 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Attempting to create a virtualenv
2026-03-07T10:49:00.955 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /usr/bin/python3 -m venv /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_
2026-03-07T10:49:02.361 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/bin/python3 -m pip install -U pip
2026-03-07T10:49:02.537 INFO:tasks.workunit.client.0.vm06.stdout:Requirement already satisfied: pip in /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/lib/python3.10/site-packages (22.0.2)
2026-03-07T10:49:02.617 INFO:tasks.workunit.client.0.vm06.stdout:Collecting pip
2026-03-07T10:49:02.658 INFO:tasks.workunit.client.0.vm06.stdout: Downloading pip-26.0.1-py3-none-any.whl (1.8 MB)
2026-03-07T10:49:02.711 INFO:tasks.workunit.client.0.vm06.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 34.7 MB/s eta 0:00:00
2026-03-07T10:49:02.726 INFO:tasks.workunit.client.0.vm06.stdout:Installing collected packages: pip
2026-03-07T10:49:02.726 INFO:tasks.workunit.client.0.vm06.stdout: Attempting uninstall: pip
2026-03-07T10:49:02.726 INFO:tasks.workunit.client.0.vm06.stdout: Found existing installation: pip 22.0.2
2026-03-07T10:49:02.854 INFO:tasks.workunit.client.0.vm06.stdout: Uninstalling pip-22.0.2:
2026-03-07T10:49:02.858 INFO:tasks.workunit.client.0.vm06.stdout: Successfully uninstalled pip-22.0.2
2026-03-07T10:49:03.234 INFO:tasks.workunit.client.0.vm06.stdout:Successfully installed pip-26.0.1
2026-03-07T10:49:03.272 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/bin/python3 -m venv --help
2026-03-07T10:49:03.289 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/bin/python3 -m pip install --target /tmp/tmpy2jkx7rm.cephadm.build/deps --no-binary :all: 'MarkupSafe >= 2.1.3, <2.2' 'Jinja2 >= 3.1.2, <3.2'
2026-03-07T10:49:03.531 INFO:tasks.workunit.client.0.vm06.stdout:Collecting MarkupSafe<2.2,>=2.1.3
2026-03-07T10:49:03.571 INFO:tasks.workunit.client.0.vm06.stdout: Downloading MarkupSafe-2.1.5.tar.gz (19 kB)
2026-03-07T10:49:03.581 INFO:tasks.workunit.client.0.vm06.stdout: Installing build dependencies: started
2026-03-07T10:49:04.852 INFO:tasks.workunit.client.0.vm06.stdout: Installing build dependencies: finished with status 'done'
2026-03-07T10:49:04.852 INFO:tasks.workunit.client.0.vm06.stdout: Getting requirements to build wheel: started
2026-03-07T10:49:05.006 INFO:tasks.workunit.client.0.vm06.stdout: Getting requirements to build wheel: finished with status 'done'
2026-03-07T10:49:05.006 INFO:tasks.workunit.client.0.vm06.stdout: Preparing metadata (pyproject.toml): started
2026-03-07T10:49:05.091 INFO:tasks.workunit.client.0.vm06.stdout: Preparing metadata (pyproject.toml): finished with status 'done'
2026-03-07T10:49:05.108 INFO:tasks.workunit.client.0.vm06.stdout:Collecting Jinja2<3.2,>=3.1.2
2026-03-07T10:49:05.117 INFO:tasks.workunit.client.0.vm06.stdout: Downloading jinja2-3.1.6.tar.gz (245 kB)
2026-03-07T10:49:05.157 INFO:tasks.workunit.client.0.vm06.stdout: Installing build dependencies: started
2026-03-07T10:49:05.593 INFO:tasks.workunit.client.0.vm06.stdout: Installing build dependencies: finished with status 'done'
2026-03-07T10:49:05.593 INFO:tasks.workunit.client.0.vm06.stdout: Getting requirements to build wheel: started
2026-03-07T10:49:05.640 INFO:tasks.workunit.client.0.vm06.stdout: Getting requirements to build wheel: finished with status 'done'
2026-03-07T10:49:05.641 INFO:tasks.workunit.client.0.vm06.stdout: Preparing metadata (pyproject.toml): started
2026-03-07T10:49:05.678 INFO:tasks.workunit.client.0.vm06.stdout: Preparing metadata (pyproject.toml): finished with status 'done'
2026-03-07T10:49:05.680 INFO:tasks.workunit.client.0.vm06.stdout:Building wheels for collected packages: MarkupSafe, Jinja2
2026-03-07T10:49:05.681 INFO:tasks.workunit.client.0.vm06.stdout: Building wheel for MarkupSafe (pyproject.toml): started
2026-03-07T10:49:05.786 INFO:tasks.workunit.client.0.vm06.stdout: Building wheel for MarkupSafe (pyproject.toml): finished with status 'done'
2026-03-07T10:49:05.786 INFO:tasks.workunit.client.0.vm06.stdout: Created wheel for MarkupSafe: filename=markupsafe-2.1.5-py3-none-any.whl size=9916 sha256=8a695ac19a31525baf559c399b14e37b5c2a94dea19d98414e6a64b5e351abee
2026-03-07T10:49:05.786 INFO:tasks.workunit.client.0.vm06.stdout: Stored in directory: /home/ubuntu/.cache/pip/wheels/b6/62/2a/14e4ae067769a57af54289f65f20e0b76a5130cd7a19b7e8f9
2026-03-07T10:49:05.788 INFO:tasks.workunit.client.0.vm06.stdout: Building wheel for Jinja2 (pyproject.toml): started
2026-03-07T10:49:05.836 INFO:tasks.workunit.client.0.vm06.stdout: Building wheel for Jinja2 (pyproject.toml): finished with status 'done'
2026-03-07T10:49:05.836 INFO:tasks.workunit.client.0.vm06.stdout: Created wheel for Jinja2: filename=jinja2-3.1.6-py3-none-any.whl size=134897 sha256=005a2fc379b662ac007c271a8122f7a7db49bb67fbedfc85db50fccde34d839f
2026-03-07T10:49:05.836 INFO:tasks.workunit.client.0.vm06.stdout: Stored in directory: /home/ubuntu/.cache/pip/wheels/9d/38/db/3e90209164678f47dc069612a890264e4c9c48e3bceb3fea12
2026-03-07T10:49:05.837 INFO:tasks.workunit.client.0.vm06.stdout:Successfully built MarkupSafe Jinja2
2026-03-07T10:49:05.840 INFO:tasks.workunit.client.0.vm06.stdout:Installing collected packages: MarkupSafe, Jinja2
2026-03-07T10:49:05.883 INFO:tasks.workunit.client.0.vm06.stdout:
2026-03-07T10:49:05.884 INFO:tasks.workunit.client.0.vm06.stdout:Successfully installed Jinja2-3.1.6 MarkupSafe-2.1.5
2026-03-07T10:49:05.993 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/bin/python3 -m pip install --target /tmp/tmpy2jkx7rm.cephadm.build/deps 'PyYAML >= 6.0, <6.1'
2026-03-07T10:49:06.218 INFO:tasks.workunit.client.0.vm06.stdout:Collecting PyYAML<6.1,>=6.0
2026-03-07T10:49:06.259 INFO:tasks.workunit.client.0.vm06.stdout: Downloading pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl.metadata (2.4 kB)
2026-03-07T10:49:06.270 INFO:tasks.workunit.client.0.vm06.stdout:Downloading pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl (770 kB)
2026-03-07T10:49:06.308 INFO:tasks.workunit.client.0.vm06.stdout: ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 770.3/770.3 kB 33.7 MB/s 0:00:00
2026-03-07T10:49:06.320 INFO:tasks.workunit.client.0.vm06.stdout:Installing collected packages: PyYAML
2026-03-07T10:49:06.348 INFO:tasks.workunit.client.0.vm06.stdout:Successfully installed PyYAML-6.0.3
2026-03-07T10:49:06.375 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Running cmd: /tmp/tmpy2jkx7rm.cephadm.build/deps/_venv_/bin/python3 -m pip list --format=json --path /tmp/tmpy2jkx7rm.cephadm.build/deps
2026-03-07T10:49:06.492 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Copying contents
2026-03-07T10:49:06.495 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Byte-compiling py to pyc
2026-03-07T10:49:06.558 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Constructing the zipapp file
2026-03-07T10:49:06.608 INFO:tasks.workunit.client.0.vm06.stdout:cephadm/build.py: Zipapp created with compression
2026-03-07T10:49:06.620 INFO:tasks.workunit.client.0.vm06.stderr:+ NO_BUILD_INFO=1
2026-03-07T10:49:06.620 INFO:tasks.workunit.client.0.vm06.stderr:+ '[' -x tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF ']'
2026-03-07T10:49:06.621 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM_ARGS=' --image quay.ceph.io/ceph-ci/ceph:squid'
2026-03-07T10:49:06.621 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM_BIN=tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:06.621 INFO:tasks.workunit.client.0.vm06.stderr:+ CEPHADM='sudo tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF --image quay.ceph.io/ceph-ci/ceph:squid'
2026-03-07T10:49:06.621 INFO:tasks.workunit.client.0.vm06.stderr:+ sudo tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF --image quay.ceph.io/ceph-ci/ceph:squid rm-cluster --fsid 00000000-0000-0000-0000-0000deadbeef --force
2026-03-07T10:49:06.695 INFO:tasks.workunit.client.0.vm06.stdout:Deleting cluster with fsid: 00000000-0000-0000-0000-0000deadbeef
2026-03-07T10:49:07.745 INFO:tasks.workunit.client.0.vm06.stderr:Traceback (most recent call last):
2026-03-07T10:49:07.745 INFO:tasks.workunit.client.0.vm06.stderr: File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2026-03-07T10:49:07.745 INFO:tasks.workunit.client.0.vm06.stderr: return _run_code(code, main_globals, None,
2026-03-07T10:49:07.745 INFO:tasks.workunit.client.0.vm06.stderr: File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
2026-03-07T10:49:07.745 INFO:tasks.workunit.client.0.vm06.stderr: exec(code, run_globals)
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr: File "/tmp/tmpy2jkx7rm.cephadm.build/app/__main__.py", line 5581, in
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr: File "/tmp/tmpy2jkx7rm.cephadm.build/app/__main__.py", line 5569, in main
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr: File "/tmp/tmpy2jkx7rm.cephadm.build/app/__main__.py", line 4327, in command_rm_cluster
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr: File "/tmp/tmpy2jkx7rm.cephadm.build/app/__main__.py", line 4391, in _rm_cluster
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr: File "/tmp/tmpy2jkx7rm.cephadm.build/app/__main__.py", line 4317, in get_ceph_cluster_count
2026-03-07T10:49:07.746 INFO:tasks.workunit.client.0.vm06.stderr:FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/ceph'
2026-03-07T10:49:07.760 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:49:07.760 INFO:tasks.workunit:Stopping ['cephadm/test_cephadm.sh'] on client.0...
2026-03-07T10:49:07.760 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2026-03-07T10:49:08.130 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_c00b45d1ae607078ce5f9bef6b691d18bc82a838/qa/tasks/workunit.py", line 125, in task
    with parallel() as p:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthos/teuthology/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthos/src/github.com_kshtsk_ceph_c00b45d1ae607078ce5f9bef6b691d18bc82a838/qa/tasks/workunit.py", line 433, in _run_tests
    remote.run(
  File "/home/teuthos/teuthology/teuthology/orchestra/remote.py", line 575, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 461, in run
    r.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_cephadm.sh) on vm06 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c00b45d1ae607078ce5f9bef6b691d18bc82a838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
2026-03-07T10:49:08.131 DEBUG:teuthology.run_tasks:Unwinding manager clock
2026-03-07T10:49:08.133 INFO:teuthology.task.clock:Checking final clock skew...
2026-03-07T10:49:08.133 DEBUG:teuthology.orchestra.run.vm06:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: remote refid st t when poll reach delay offset jitter
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:==============================================================================
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:#router02.i-tk.d 192.168.125.22 2 u 42 64 1 48.434 -11.122 2.451
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+mailout04.fisch 205.46.178.169 2 u 42 64 1 25.239 -7.794 3.349
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:#ip217-154-182-6 37.15.221.189 2 u 41 64 1 66.764 -12.565 3.380
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+77.90.0.148 (14 131.188.3.220 2 u 41 64 1 23.196 -6.325 3.599
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+ntp.b-ite.de 131.188.3.221 2 u 41 64 1 25.080 -7.071 3.379
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+t1.ipfu.de 193.51.170.61 3 u 40 64 1 28.714 -8.616 3.170
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+47.ip-51-75-67. 185.248.188.98 2 u 40 64 1 21.249 -6.179 3.354
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:*static.buzo.eu 100.10.69.89 2 u 40 64 1 23.490 -6.522 3.378
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+ntp2.uni-ulm.de 129.69.253.1 2 u 36 64 1 27.987 -5.772 3.064
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+ntp5.kernfusion 237.17.204.95 2 u 39 64 1 28.726 -5.902 3.368
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+cloudrouter.1in 131.188.3.221 2 u 39 64 1 28.800 -9.056 3.313
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+time2.sebhostin 127.65.222.189 2 u 39 64 1 28.931 -5.474 3.453
2026-03-07T10:49:09.189 INFO:teuthology.orchestra.run.vm06.stdout:+x1.ncomputers.o 82.64.42.185 2 u 36 64 1 32.009 -5.627 3.131
2026-03-07T10:49:09.190 INFO:teuthology.orchestra.run.vm06.stdout: alphyn.canonica 132.163.96.1 2 u 47 64 1 101.471 -14.061 0.000
2026-03-07T10:49:09.190 INFO:teuthology.orchestra.run.vm06.stdout:+byggvir.de 130.149.17.21 2 u 37 64 1 28.790 -5.395 3.388
2026-03-07T10:49:09.190 INFO:teuthology.orchestra.run.vm06.stdout:+vps-fra8.orlean 195.145.119.188 2 u 36 64 1 34.950 -5.583 2.795
2026-03-07T10:49:09.190 INFO:teuthology.orchestra.run.vm06.stdout: 185.125.190.58 37.15.221.189 2 u 45 64 1 34.698 -10.436 0.000
2026-03-07T10:49:09.190 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-07T10:49:09.192 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-07T10:49:09.192 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-07T10:49:09.193 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-07T10:49:09.195 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-07T10:49:09.196 INFO:teuthology.task.internal:Duration was 238.412563 seconds
2026-03-07T10:49:09.196 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-07T10:49:09.198 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-07T10:49:09.198 DEBUG:teuthology.orchestra.run.vm06:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-07T10:49:09.217 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-07T10:49:09.217 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm06.local
2026-03-07T10:49:09.217 DEBUG:teuthology.orchestra.run.vm06:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-07T10:49:09.267 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-07T10:49:09.267 DEBUG:teuthology.orchestra.run.vm06:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-07T10:49:09.325 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-07T10:49:09.325 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-07T10:49:09.372 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-07T10:49:09.373 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-07T10:49:09.373 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-07T10:49:09.373 INFO:teuthology.orchestra.run.vm06.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-07T10:49:09.373 INFO:teuthology.orchestra.run.vm06.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-07T10:49:09.375 INFO:teuthology.orchestra.run.vm06.stderr: 83.5% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-07T10:49:09.376 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-07T10:49:09.378 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-07T10:49:09.378 DEBUG:teuthology.orchestra.run.vm06:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-07T10:49:09.425 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-07T10:49:09.427 DEBUG:teuthology.orchestra.run.vm06:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-07T10:49:09.472 INFO:teuthology.orchestra.run.vm06.stdout:kernel.core_pattern = core
2026-03-07T10:49:09.479 DEBUG:teuthology.orchestra.run.vm06:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-07T10:49:09.524 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:49:09.524 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-07T10:49:09.526 INFO:teuthology.task.internal:Transferring archived files...
2026-03-07T10:49:09.527 DEBUG:teuthology.misc:Transferring archived files from vm06:/home/ubuntu/cephtest/archive to /archive/irq0-2026-03-07_10:43:39-orch:cephadm:workunits-cobaltcore-storage-v19.2.3-fasttrack-5-none-default-vps/22/remote/vm06
2026-03-07T10:49:09.527 DEBUG:teuthology.orchestra.run.vm06:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-07T10:49:09.573 INFO:teuthology.task.internal:Removing archive directory...
2026-03-07T10:49:09.573 DEBUG:teuthology.orchestra.run.vm06:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-07T10:49:09.617 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-07T10:49:09.619 INFO:teuthology.task.internal:Not uploading archives.
2026-03-07T10:49:09.619 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-07T10:49:09.620 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-07T10:49:09.621 DEBUG:teuthology.orchestra.run.vm06:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 258075 4 drwxr-xr-x 3 ubuntu ubuntu 4096 Mar 7 10:49 /home/ubuntu/cephtest
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 780978 4 drwxrwxr-x 3 ubuntu ubuntu 4096 Mar 7 10:48 /home/ubuntu/cephtest/mnt.0
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 780988 4 drwxrwxr-x 3 ubuntu ubuntu 4096 Mar 7 10:49 /home/ubuntu/cephtest/mnt.0/client.0
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 1046093 4 drwxrwxr-x 4 ubuntu ubuntu 4096 Mar 7 10:49 /home/ubuntu/cephtest/mnt.0/client.0/tmp
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 1046094 4 drwx------ 2 ubuntu ubuntu 4096 Mar 7 10:49 /home/ubuntu/cephtest/mnt.0/client.0/tmp/tmp.test_cephadm.sh.uibCgt
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 1046096 780 -rwx------ 1 ubuntu ubuntu 794661 Mar 7 10:49 /home/ubuntu/cephtest/mnt.0/client.0/tmp/tmp.test_cephadm.sh.uibCgt/tmp.cephadm.RxMHhF
2026-03-07T10:49:09.661 INFO:teuthology.orchestra.run.vm06.stdout: 1046095 4 drwx------ 2 ubuntu ubuntu 4096 Mar 7 10:49 /home/ubuntu/cephtest/mnt.0/client.0/tmp/tmp.test_cephadm.sh.c5UA8L
2026-03-07T10:49:09.662 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-07T10:49:09.662 INFO:teuthology.orchestra.run.vm06.stderr:rmdir: failed to remove '/home/ubuntu/cephtest': Directory not empty
2026-03-07T10:49:09.662 ERROR:teuthology.run_tasks:Manager failed: internal.base
Traceback (most recent call last):
  File "/home/teuthos/teuthology/teuthology/run_tasks.py", line 160, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/home/teuthos/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/contextlib.py", line 142, in __exit__
    next(self.gen)
  File "/home/teuthos/teuthology/teuthology/task/internal/__init__.py", line 53, in base
    run.wait(
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 485, in wait
    proc.wait()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthos/teuthology/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on vm06 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-07T10:49:09.662 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-07T10:49:09.665 DEBUG:teuthology.run_tasks:Exception was not quenched, exiting: CommandFailedError: Command failed on vm06 with status 1: 'find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest'
2026-03-07T10:49:09.666 INFO:teuthology.run:Summary data:
description: orch:cephadm:workunits/{0-distro/ubuntu_22.04 agent/on mon_election/connectivity task/test_cephadm}
duration: 238.41256260871887
failure_reason: 'Command failed (workunit test cephadm/test_cephadm.sh) on vm06 with status 1: ''mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=c00b45d1ae607078ce5f9bef6b691d18bc82a838 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'''
owner: irq0
sentry_event: null
status: fail
success: false
2026-03-07T10:49:09.666 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-07T10:49:09.682 INFO:teuthology.run:FAIL